Recent articles
- How Intel® Liftoff Startups Are Winning with DeepSeek
- Finetuning & Inference on GenAI Models using Optimum Habana and the GPU Migration Toolkit on Intel®
- Agentic AI and Confidential Computing: A Perfect Synergy for Secure Innovation
- AI PC Pilot Hackathon ‘24: Where Intel® Student Ambassadors Built High-performance AI Solutions
- Discover the Power of DeepSeek-R1: A Cost-Efficient AI Model
Intel NN News
- Intel Labs AI Tool Research Protects Artist Data and Human Voices from Use by Generative AI
The Trusted Media research team at Intel Labs is working on several projects to help artists and […]
- How Intel® Liftoff Startups Are Winning with DeepSeek
From security and efficiency to testing, Intel® Liftoff Startups have jumped at the chance to […]
- AI PC Pilot Hackathon ‘24: Where Intel® Student Ambassadors Built High-performance AI Solutions
Top projects built by Intel® Student Ambassadors at the AI PC Pilot hackathon ’24.
Monthly archives: April 2022
Accelerating AI/ML and Data-centric Applications with Temporal Caching
Intel® Optane™ persistent memory can help unify data access methods for temporal caching to improve data-centric workload performance.
Easily Optimize Deep Learning with 8-Bit Quantization
Discover how to use the Neural Network Compression Framework (NNCF) of the OpenVINO™ toolkit for 8-bit quantization in PyTorch; a minimal sketch follows below.
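Since this teaser points at NNCF's 8-bit quantization for PyTorch, here is a minimal post-training quantization sketch. It assumes the current `nncf.quantize` API; the model choice, calibration data, and export step are illustrative, not taken from the article.

```python
# Hedged sketch: NNCF post-training 8-bit quantization of a PyTorch model.
# nncf.quantize is the modern entry point; the original article may use an
# older, config-driven NNCF workflow.
import nncf
import torch
import torchvision.models as models

model = models.resnet50(weights=None)  # illustrative model; any nn.Module works
model.eval()

# A few hundred representative samples are typical for calibration;
# random tensors here are placeholders only.
calibration_data = [torch.randn(1, 3, 224, 224) for _ in range(10)]

def transform_fn(item):
    # Map one dataset item to the model's expected input format.
    return item

calibration_dataset = nncf.Dataset(calibration_data, transform_fn)

# Insert fake-quantize ops and calibrate activation ranges; defaults produce INT8.
quantized_model = nncf.quantize(model, calibration_dataset)

# Export to ONNX, from which OpenVINO IR can be generated for deployment.
torch.onnx.export(quantized_model, torch.randn(1, 3, 224, 224), "model_int8.onnx")
```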
Tech for Good and AI Within Intel's Health and Life Sciences Division
How Intel’s AI tech innovation can be applied to solve healthcare challenges.
Accelerating Media Analytics with Intel® DL Streamer featuring OpenVINO with Intel® RealSense
Optimize, tune, and run comprehensive AI inference with the OpenVINO™ toolkit; a pipeline sketch follows below.
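As a rough illustration of the DL Streamer pipelines this post covers, the snippet below launches a detection pipeline via `gst-launch-1.0` from Python. The file path, model path, and element choices are assumptions; the article additionally pairs the pipeline with an Intel® RealSense™ camera source, which is omitted here.

```python
# Hedged sketch: run an Intel DL Streamer inference pipeline. Assumes the
# DL Streamer GStreamer elements (gvadetect, gvawatermark) are installed;
# media and model paths are placeholders.
import subprocess

pipeline = (
    "filesrc location=input.mp4 ! decodebin ! "
    "gvadetect model=face-detection.xml device=CPU ! "  # OpenVINO IR model
    "gvawatermark ! videoconvert ! autovideosink"       # draw boxes, display
)
subprocess.run(["gst-launch-1.0"] + pipeline.split(), check=True)
```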
How to use Model Downloader for Encoder-Decoder Models using the OpenVINO™ toolkit
Model Converter converts a public model into Inference Engine IR format using Model Optimizer; a download-and-convert sketch follows below.
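A small driver for the download-and-convert flow this post describes, assuming the Open Model Zoo tools `omz_downloader` and `omz_converter` are installed (they ship with the OpenVINO dev tools); the model name is a hypothetical placeholder, not one named by the article.

```python
# Hedged sketch: fetch an Open Model Zoo model and convert it to IR.
import subprocess

MODEL = "machine-translation-nar-en-de-0002"  # placeholder; substitute any OMZ model name

# Download the model files from the Open Model Zoo.
subprocess.run(["omz_downloader", "--name", MODEL, "--output_dir", "models"],
               check=True)

# Run Model Converter, which invokes Model Optimizer to produce IR.
# Models already distributed in IR format need no conversion.
subprocess.run(["omz_converter", "--name", MODEL,
                "--download_dir", "models", "--output_dir", "models/ir"],
               check=True)
```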
AI Data Processing: Near-Memory Compute for Energy-Efficient Systems
Near-memory compute is becoming important for future AI processing systems that need better system performance and energy efficiency. The von Neumann computing model requires data to travel from memory to compute, and this data movement burns energy. Is it … Continue reading
BootstrapNAS: Automated Hardware-Aware Model Optimization Tool Shows Up to 11.3x AI Improvement
Intel Labs is developing an automated hardware-aware model optimization tool called BootstrapNAS to simplify the optimization of pre-trained AI models on Intel hardware, including Intel® Xeon® Scalable processors, which deliver built-in AI acceleration and flexibility. The tool will provide considerable … Continue reading
OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models
Developers can now leverage model caching through the OpenVINO Execution Provider for ONNX Runtime, a product that accelerates inference of ONNX models through ONNX Runtime APIs while using the OpenVINO™ toolkit as a backend. With the OpenVINO Execution Provider, … Continue reading (a caching sketch follows below)
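A minimal sketch of the caching setup this post describes, assuming the `onnxruntime-openvino` package. The provider-option names (`device_type`, `cache_dir`) have varied across releases, so treat them as an assumption rather than the article's exact configuration; paths and the input name are placeholders.

```python
# Hedged sketch: ONNX Runtime inference through the OpenVINO Execution
# Provider with model caching enabled.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{
        "device_type": "CPU",       # OpenVINO target device
        "cache_dir": "./ov_cache",  # compiled blobs cached here and reused on
                                    # later runs, cutting first-inference latency
    }],
)

dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = session.run(None, {"input": dummy_input})  # "input" is a placeholder name
```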