Developers can now leverage model caching through the OpenVINO Execution Provider for ONNX Runtime, a product that accelerates inferencing of ONNX models using the ONNX Runtime APIs with the OpenVINO™ toolkit as a backend. With the OpenVINO Execution Provider, ONNX Runtime delivers better inferencing performance on the same Intel® CPU, GPU, and VPU hardware compared to generic acceleration.
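As a rough illustration, the sketch below shows how model caching is typically enabled when creating an ONNX Runtime session with the OpenVINO Execution Provider. The model path and cache directory are placeholders, and the provider option keys (`device_type`, `cache_dir`) are assumptions based on recent OpenVINO Execution Provider releases; check the documentation for your installed version.

```python
import onnxruntime as ort

# Minimal sketch: enabling OpenVINO EP model caching for an ONNX model.
# "model.onnx" and "ov_cache" are placeholder names; the provider option
# keys ("device_type", "cache_dir") are assumed from recent OpenVINO
# Execution Provider releases and may differ in older versions.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{
        "device_type": "GPU_FP32",  # target device, e.g. CPU_FP32, GPU_FP16
        "cache_dir": "ov_cache",    # compiled model blobs are stored here
    }],
)
```

With a cache directory in place, later sessions created against the same model and device can reuse the compiled blobs instead of recompiling, which shortens model load (first-inference) time.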