Recent articles
- Beewant’s Multimodal AI: Smarter Solutions for Training, Travel, and Safety
- Get Your Innovation to Go with Innovation Select Videos
- Building AI for Low-Resource Languages: Bezoku’s Innovative Approach
- Accelerate PyTorch* Inference with torch.compile on Windows* CPU
- DubHacks’24 Hackathon Where Developers Innovatively Utilized Intel® Tiber™ AI Cloud and AI PCs
Neural networks news
Intel NN News
- Beewant’s Multimodal AI: Smarter Solutions for Training, Travel, and Safety
Beewant’s cutting-edge multimodal AI redefines multimedia, driving innovative applications across […]
- Get Your Innovation to Go with Innovation Select Videos
Catch up on the latest Intel Innovation developer and technical content with demos, tech talks and […]
- Building AI for Low-Resource Languages: Bezoku's Innovative Approach
Bezoku, a member of the Intel® Liftoff program, is addressing the challenges of low-resource […]
Monthly archives: April 2022
Accelerating AI/ML and Data-centric Applications with Temporal Caching
Intel® Optane™ persistent memory can help unify data access methods for temporal caching to improve data-centric workload performance.
Easily Optimize Deep Learning with 8-Bit Quantization
Discover how to use the Neural Network Compression Framework (NNCF) of the OpenVINO™ toolkit for 8-bit quantization in PyTorch.
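As a rough, illustrative sketch of the workflow this post describes, the snippet below runs NNCF post-training 8-bit quantization on a PyTorch model. The model and calibration data are stand-ins, and the entry points shown (nncf.Dataset, nncf.quantize) come from the newer post-training API; the article itself may use the older NNCFConfig/create_compressed_model flow.

```python
import nncf                      # Neural Network Compression Framework
import torch
import torchvision

# Stand-in model and calibration data; replace with your own.
model = torchvision.models.resnet50(weights="DEFAULT").eval()
calibration_loader = torch.utils.data.DataLoader(
    torchvision.datasets.FakeData(size=300, transform=torchvision.transforms.ToTensor()),
    batch_size=1,
)

def transform_fn(data_item):
    images, _ = data_item        # NNCF only needs the model inputs
    return images

# Wrap the loader so NNCF can draw calibration samples from it.
calibration_dataset = nncf.Dataset(calibration_loader, transform_fn)

# Insert and calibrate 8-bit fake-quantize operations.
quantized_model = nncf.quantize(model, calibration_dataset)

# Sanity check: the quantized model still runs a forward pass.
with torch.no_grad():
    _ = quantized_model(torch.rand(1, 3, 224, 224))
```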
Tech for Good and AI within Intel’s Health and Life Sciences division
How Intel’s AI tech innovation can be applied to solve healthcare challenges.
Accelerating Media Analytics with Intel® DL Streamer featuring OpenVINO with Intel® RealSense
Optimize, tune, and run comprehensive AI inference with OpenVINO Toolkit.
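Purely as an illustrative sketch, not taken from the article: DL Streamer inference is typically expressed as a GStreamer pipeline, so a minimal object-detection pipeline could be launched from Python as below. The video file and OpenVINO IR model path are placeholders, the gvadetect/gvawatermark elements are the commonly documented DL Streamer elements, and the article’s RealSense integration is not shown.

```python
import subprocess

# Placeholder inputs; substitute a real video and an OpenVINO IR detection model (.xml).
VIDEO = "input.mp4"
MODEL = "detection-model.xml"

# Basic DL Streamer pipeline: decode -> detect (OpenVINO backend) -> overlay -> display.
pipeline = (
    f"filesrc location={VIDEO} ! decodebin ! "
    f"gvadetect model={MODEL} device=CPU ! "
    "gvawatermark ! videoconvert ! autovideosink sync=false"
)

# gst-launch-1.0 takes the pipeline description as whitespace-separated arguments.
subprocess.run(["gst-launch-1.0", *pipeline.split()], check=True)
```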
How to use Model Downloader for Encoder Decoder Models using OpenVINO™ toolkit
Model Converter converts a public model into the Inference Engine IR format using Model Optimizer.
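To make the two tools concrete, here is a minimal sketch assuming the openvino-dev package, which installs the omz_downloader and omz_converter entry points from the Open Model Zoo. The model name is a placeholder used only for illustration, not a model named in the article.

```python
import subprocess

# Hypothetical model name; pick a public encoder-decoder model
# from the Open Model Zoo instead.
MODEL = "public-encoder-decoder-model"
DOWNLOAD_DIR = "models"

# Model Downloader: fetch the public model's original files.
subprocess.run(
    ["omz_downloader", "--name", MODEL, "--output_dir", DOWNLOAD_DIR],
    check=True,
)

# Model Converter: invoke Model Optimizer under the hood to produce
# Inference Engine IR (.xml/.bin) from the downloaded files.
subprocess.run(
    ["omz_converter", "--name", MODEL, "--download_dir", DOWNLOAD_DIR, "--output_dir", DOWNLOAD_DIR],
    check=True,
)
```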
AI Data Processing: Near-Memory Compute for Energy-Efficient Systems
Near-Memory Compute is becoming important for future AI processing systems that need improvements in system performance and energy efficiency. The Von Neumann computing model requires data to commute from memory to compute, and this data movement burns energy. Is it … Continue reading
BootstrapNAS: Automated Hardware-Aware Model Optimization Tool Shows Up to 11.3x AI Improvement
Intel Labs is developing an automated hardware-aware model optimization tool called BootstrapNAS to simplify the optimization of pre-trained AI models on Intel hardware, including Intel® Xeon® Scalable processors, which deliver built-in AI acceleration and flexibility. The tool will provide considerable … Continue reading
OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models
Developers like you can now leverage model caching through the OpenVINO Execution Provider for ONNX Runtime, a product that accelerates inferencing of ONNX models using ONNX Runtime APIs while using the OpenVINO™ toolkit as a backend. With the OpenVINO Execution Provider, … Continue reading
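To sketch what enabling model caching looks like in practice, here is a minimal ONNX Runtime session that selects the OpenVINO Execution Provider and points it at a cache directory. The model path is a placeholder, the input is assumed to have a fully static shape, and the exact provider option values (device_type, cache_dir) may vary between onnxruntime-openvino releases.

```python
import numpy as np
import onnxruntime as ort

# Placeholder model; any ONNX model supported by the OpenVINO Execution Provider works.
MODEL_PATH = "model.onnx"

# The first session compiles the model with OpenVINO and writes a blob to cache_dir;
# later sessions reload that blob, which is what improves first-inference latency.
session = ort.InferenceSession(
    MODEL_PATH,
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "CPU_FP32", "cache_dir": "./ov_cache"}],
)

inp = session.get_inputs()[0]
dummy = np.random.rand(*inp.shape).astype(np.float32)  # assumes static dims

outputs = session.run(None, {inp.name: dummy})
print(outputs[0].shape)
```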