Recent Articles
- Clustering Time Series with PCA and DBSCAN
- Deploy Enterprise-Ready AI with Dell PowerEdge and Intel® Gaudi® 3
- Roofline AI’s Role in Advancing Compiler Technology with oneAPI
- UT Austin’s HackTX 2024 Hackathon: Top Projects Built Using Intel® AI Technologies
- Transforming 2D Designs into Stunning 3D Creations Using AI with Adobe Creative Cloud and Substance
Neural networks news
Intel NN News
- Clustering Time Series with PCA and DBSCAN
This article shows how to perform clustering of time series data using PCA and DBSCAN.
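The approach the article describes can be sketched as a two-step pipeline: reduce each series to a few principal components, then run density-based clustering in that low-dimensional space. The following is a minimal illustration on synthetic data; the `eps` and `min_samples` values are assumptions that must be tuned for real datasets.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic dataset: 10 noisy sinusoidal series and 10 noisy linear trends,
# each sampled at 100 time steps (one series per row).
t = np.linspace(0, 2 * np.pi, 100)
sines = np.sin(t) + 0.05 * rng.standard_normal((10, 100))
trends = np.linspace(0, 1, 100) + 0.05 * rng.standard_normal((10, 100))
X = np.vstack([sines, trends])

# Step 1: project each 100-point series onto its first 2 principal components.
Z = PCA(n_components=2).fit_transform(X)

# Step 2: density-based clustering in the reduced space.
# eps and min_samples are illustrative and dataset-dependent.
labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(Z)
print(labels)
```

Because DBSCAN does not require the number of clusters up front, it pairs well with PCA here: the projection separates the two series families, and DBSCAN recovers them without a preset `k`.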
- Deploy Enterprise-Ready AI with Dell PowerEdge and Intel® Gaudi® 3
Learn about the newly launched Dell Generative AI Solutions with Intel, powered by Dell PowerEdge […]
- Roofline AI's Role in Advancing Compiler Technology with oneAPI
Intel® Liftoff member Roofline AI took the stage at the oneAPI DevSummit to showcase its […]
Monthly Archives: April 2022
Accelerating AI/ML and Data-centric Applications with Temporal Caching
Intel® Optane™ persistent memory can help unify data access methods for temporal caching to improve data-centric workload performance.
Easily Optimize Deep Learning with 8-Bit Quantization
Discover how to use the Neural Network Compression Framework of the OpenVINO™ toolkit for 8-bit quantization in PyTorch.
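The article covers the Neural Network Compression Framework (NNCF); as a generic stand-in for the same idea, the sketch below uses PyTorch's built-in post-training dynamic quantization to store weights as int8. NNCF's own API differs, so treat this only as an illustration of what 8-bit quantization does to a model.

```python
import torch
import torch.nn as nn

# A tiny float32 model standing in for a trained network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

# Post-training dynamic quantization: Linear weights are stored as int8;
# activations are quantized on the fly at inference time.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 16)
with torch.no_grad():
    y = qmodel(x)
print(y.shape)
```

The quantized model keeps the same forward interface, so it drops into existing inference code while shrinking weight storage roughly 4x.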
Tech for Good and AI within Intel's Health and Life Sciences division
How Intel’s AI tech innovation can be applied to solve healthcare challenges.
Accelerating Media Analytics with Intel® DL Streamer featuring OpenVINO with Intel® RealSense
Optimize, tune, and run comprehensive AI inference with OpenVINO Toolkit.
How to use Model Downloader for Encoder Decoder Models using OpenVINO™ toolkit
Model Converter converts a public model into Inference Engine IR format using Model Optimizer.
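The download-then-convert workflow the teaser describes maps to two command-line tools shipped with the `openvino-dev` package, as sketched below. `<model_name>` is a placeholder; the exact flags may vary between OpenVINO releases.

```shell
# Download a public model from the Open Model Zoo
# (list available names with: omz_downloader --print_all).
omz_downloader --name <model_name> --output_dir models

# Convert the downloaded model to OpenVINO IR format;
# this invokes Model Optimizer under the hood.
omz_converter --name <model_name> --download_dir models --output_dir ir_models
```

Models already distributed in IR format skip the conversion step; the converter only runs Model Optimizer for models published in framework-native formats.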
AI Data Processing: Near-Memory Compute for Energy-Efficient Systems
Near-Memory Compute is becoming important for future AI processing systems that need improvements in system performance and energy efficiency. The von Neumann computing model requires data to commute from memory to compute, and this data movement burns energy. Is it […]
BootstrapNAS: Automated Hardware-Aware Model Optimization Tool Shows Up to 11.3x AI Improvement
Intel Labs is developing an automated hardware-aware model optimization tool called BootstrapNAS to simplify the optimization of pre-trained AI models on Intel hardware, including Intel® Xeon® Scalable processors, which deliver built-in AI acceleration and flexibility. The tool will provide considerable […]
OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models
Developers like yourself can now leverage model caching through the OpenVINO Execution Provider for ONNX Runtime, a product that accelerates inferencing of ONNX models using the ONNX Runtime APIs while using the OpenVINO™ toolkit as a backend. With the OpenVINO Execution Provider, […]
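Enabling the OpenVINO Execution Provider with caching boils down to passing provider options when creating an ONNX Runtime session. The sketch below assumes the `onnxruntime-openvino` package is installed; `model.onnx`, the input name, and the exact option names (`device_type`, `cache_dir`) are placeholders that may vary by release.

```python
import numpy as np
import onnxruntime as ort

# cache_dir tells the OpenVINO Execution Provider where to store compiled
# model blobs, so subsequent sessions skip recompilation and the first
# inference starts faster.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path to your ONNX model
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "CPU", "cache_dir": "./ov_cache"}],
)

# Input name and shape depend on the model; shown here for illustration only.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {"input": x})
```

The first session populates `./ov_cache`; later sessions against the same model and device reuse the cached blob instead of recompiling.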