Category Archives: Uncategorized

Tech for Good and AI within Intel’s Health and Life Sciences division.

How Intel’s AI tech innovation can be applied to solve healthcare challenges.

Posted in Uncategorized | Leave a comment

Easily Optimize Deep Learning with 8-Bit Quantization

Discover how to use the Neural Network Compression Framework (NNCF) of the OpenVINO™ toolkit for 8-bit quantization in PyTorch.
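The core idea behind 8-bit quantization can be sketched without the framework: map float tensor values onto the int8 range using a scale derived from the tensor's statistics. A minimal sketch of symmetric per-tensor quantization in plain Python (this illustrates the arithmetic only, not NNCF's actual API):

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: x_q = round(x / scale)."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values: x ~= x_q * scale."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.03, 0.89]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered value differs from the original by at most scale / 2,
# which is the rounding error an int8 representation introduces.
```

In practice a framework like NNCF also chooses quantization parameters per channel and uses calibration data, but the storage and compute savings come from exactly this float-to-int8 mapping.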

Accelerating AI/ML and Data-centric Applications with Temporal Caching

Intel® Optane™ persistent memory can help unify data access methods for temporal caching to improve data-centric workload performance.
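Temporal caching exploits temporal locality: data accessed recently is likely to be accessed again soon, so keeping it close to compute avoids repeated slow fetches. A minimal sketch of the pattern in plain Python (the `TemporalCache` class and `ttl_seconds` parameter are illustrative names, not Intel's API):

```python
import time

class TemporalCache:
    """Keeps entries only while they remain 'hot' (accessed within ttl_seconds)."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, last_access_time)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, last = entry
        if time.monotonic() - last > self.ttl:
            del self.store[key]                      # entry went cold; evict it
            return None
        self.store[key] = (value, time.monotonic())  # refresh recency on access
        return value

    def put(self, key, value):
        self.store[key] = (value, time.monotonic())
```

A persistent-memory tier changes where `store` lives (large, byte-addressable, surviving restarts) rather than the access pattern itself, which is why a unified access method across tiers matters for data-centric workloads.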

AI in Semiconductor Manufacturing: Solving the n-i, n Problem

Artificial intelligence (AI) is a powerful tool to transform vast amounts of manufacturing data into insights that can improve the manufacturing process. To maximize success, Intel has fine-tuned its approach to AI in manufacturing by focusing on those applications that … Continue reading

Quantizing ONNX Models using Intel® Neural Compressor

In this tutorial, we will show step-by-step how to quantize ONNX models with Intel® Neural Compressor.
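Post-training quantization of ONNX models commonly uses an asymmetric uint8 scheme, where a zero point shifts the range so that both signs of activations map into 0–255. A minimal sketch of the arithmetic in plain Python (this is the underlying math, not the Intel® Neural Compressor API):

```python
def quantize_uint8(values):
    """Asymmetric quantization: x_q = round(x / scale) + zero_point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0        # avoid scale 0 for constant tensors
    zero_point = round(-lo / scale)          # the uint8 code that represents 0.0
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_uint8(q, scale, zero_point):
    """Recover approximate float values: x ~= (x_q - zero_point) * scale."""
    return [(v - zero_point) * scale for v in q]
```

Tools such as Intel® Neural Compressor automate choosing these parameters per tensor or per channel from calibration data, then rewrite the ONNX graph to use quantized operators.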

OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models

Developers like yourself can now leverage model caching through the OpenVINO Execution Provider for ONNX Runtime, a product that accelerates inferencing of ONNX models using ONNX Runtime APIs while using the OpenVINO™ toolkit as a backend. With the OpenVINO Execution Provider, … Continue reading
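The caching idea itself is simple: the first run pays the full model-compilation cost and stores the resulting artifact; later runs with an unchanged model file load the stored artifact instead of recompiling, cutting first-inference latency. A minimal sketch of the pattern in plain Python (here `compile_model` is a stand-in for an expensive backend compilation step, not the OpenVINO Execution Provider API):

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("model_cache")

def compile_model(model_bytes):
    # Stand-in for the expensive backend compilation of a model.
    return {"compiled_from": hashlib.sha256(model_bytes).hexdigest()}

def load_compiled(model_path):
    """Return the compiled model, reusing a cached blob when the file is unchanged."""
    model_bytes = Path(model_path).read_bytes()
    key = hashlib.sha256(model_bytes).hexdigest()   # content hash keys the cache
    CACHE_DIR.mkdir(exist_ok=True)
    blob = CACHE_DIR / f"{key}.bin"
    if blob.exists():                               # cache hit: skip recompilation
        return pickle.loads(blob.read_bytes())
    compiled = compile_model(model_bytes)           # cache miss: compile once, store
    blob.write_bytes(pickle.dumps(compiled))
    return compiled
```

Keying the cache on a content hash rather than a file path means editing the model automatically invalidates the stale artifact, which is the property a production model cache needs.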

BootstrapNAS: Automated Hardware-Aware Model Optimization Tool Shows Up to 11.3x AI Improvement

Intel Labs is developing an automated hardware-aware model optimization tool called BootstrapNAS to simplify the optimization of pre-trained AI models on Intel hardware, including Intel® Xeon® Scalable processors, which deliver built-in AI acceleration and flexibility. The tool will provide considerable … Continue reading

AI Data Processing: Near-Memory Compute for Energy-Efficient Systems

Near-memory compute is becoming important for future AI processing systems that need improved system performance and energy efficiency. The Von Neumann computing model requires data to commute from memory to compute, and this data movement burns energy. Is it … Continue reading

How to use Model Downloader for Encoder Decoder Models using OpenVINO™ toolkit

Model Converter converts a public model into the Inference Engine IR format using Model Optimizer.

Accelerating Media Analytics with Intel® DL Streamer featuring OpenVINO with Intel® RealSense

Optimize, tune, and run comprehensive AI inference with the OpenVINO™ toolkit.
