Recent articles
- Multi-Modal Brand Agent: Connecting Visual Logos to Business Intelligence
- Building Efficient Multi-Modal AI Agents with Model Context Protocol (MCP)
- Intel Presents Novel Research at NAACL 2025
- Getting Started with Intel® Tiber™ AI Cloud
- AI Playground: Experience the Latest GenAI Software on AI PCs Powered by Intel® Arc™ Graphics
Neural networks news
Intel NN News
- Multi-Modal Brand Agent: Connecting Visual Logos to Business Intelligence
Identify brands from logos and retrieve business data in seconds. This AI agent links vision models […]
- Building Efficient Multi-Modal AI Agents with Model Context Protocol (MCP)
A deep dive into creating modular, specialized AI systems that rival large language models.
- Intel Presents Novel Research at NAACL 2025
Intel is proud to present four papers at this year’s Annual Conference of the Nations of the […]
Category Archives: Uncategorized
Accelerating Media Analytics with Intel® DL Streamer featuring OpenVINO with Intel® RealSense
Optimize, tune, and run comprehensive AI inference with the OpenVINO™ toolkit.
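The post itself centers on Intel® DL Streamer (GStreamer) pipelines with Intel® RealSense input, which are not reproduced here; as a minimal, hedged illustration of the underlying inference step only, the sketch below runs a model through the OpenVINO™ runtime Python API. The IR path, device name, and dummy input shape are placeholder assumptions.

```python
import numpy as np
import openvino as ov

core = ov.Core()

# "model.xml" is a placeholder path to an OpenVINO IR (or ONNX) model.
model = core.read_model("model.xml")

# Compile for a target device; "CPU" is an assumption, "GPU" also works
# on supported Intel graphics.
compiled = core.compile_model(model, "CPU")

# Dummy NCHW tensor standing in for a decoded camera/video frame.
frame = np.zeros((1, 3, 224, 224), dtype=np.float32)

# Run inference; the result maps each model output to a numpy array.
results = compiled(frame)
print({out.get_any_name(): arr.shape for out, arr in results.items()})
```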
Tech for Good and AI within Intel’s Health and Life Sciences division
How Intel’s AI tech innovation can be applied to solve healthcare challenges.
Easily Optimize Deep Learning with 8-Bit Quantization
Discover how to use the Neural Network Compression Framework (NNCF) of the OpenVINO™ toolkit for 8-bit quantization in PyTorch.
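A minimal sketch of that workflow, assuming NNCF’s post-training quantization API (nncf.quantize) rather than the article’s exact code; the ResNet-18 model, the random calibration tensors, and the identity transform are placeholder assumptions.

```python
import nncf
import torch
import torchvision

# Any FP32 PyTorch model works; ResNet-18 with random weights keeps the
# sketch self-contained (weights="DEFAULT" would download pretrained ones).
model = torchvision.models.resnet18(weights=None).eval()

# A small, representative calibration set; random tensors stand in here.
calib_loader = torch.utils.data.DataLoader(
    [torch.randn(3, 224, 224) for _ in range(32)], batch_size=8
)

def transform_fn(batch):
    # Map one dataloader batch to the model's expected input.
    return batch

calibration_dataset = nncf.Dataset(calib_loader, transform_fn)

# Insert 8-bit quantizers based on statistics gathered from the calibration set.
quantized_model = nncf.quantize(model, calibration_dataset)

# quantized_model is a regular nn.Module: benchmark it, fine-tune it, or
# export it (e.g. to ONNX / OpenVINO IR) for deployment.
```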
Accelerating AI/ML and Data-centric Applications with Temporal Caching
Intel® Optane™ persistent memory can help unify data access methods for temporal caching to improve data-centric workload performance.
AI in Semiconductor Manufacturing: Solving the n-1, n Problem
Artificial intelligence (AI) is a powerful tool to transform vast amounts of manufacturing data into insights that can improve the manufacturing process. To maximize success, Intel has fine-tuned its approach to AI in manufacturing by focusing on those applications that … Continue reading
Quantizing ONNX Models using Intel® Neural Compressor
In this tutorial, we will show step-by-step how to quantize ONNX models with Intel® Neural Compressor.
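As a rough, non-authoritative sketch of that flow, the snippet below uses Intel® Neural Compressor’s 2.x Python API with dynamic post-training quantization (so no calibration dataloader is needed); the model paths are placeholders, and the tutorial itself may instead use static quantization with calibration data.

```python
from neural_compressor import PostTrainingQuantConfig, quantization

# Dynamic post-training quantization: weights become INT8 while activations
# are quantized on the fly at runtime, so no calibration data is required.
config = PostTrainingQuantConfig(approach="dynamic")

# "model_fp32.onnx" is a placeholder path to the FP32 ONNX model to quantize.
q_model = quantization.fit(model="model_fp32.onnx", conf=config)

# Write the INT8 ONNX model to disk.
q_model.save("model_int8.onnx")
```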
OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models
Developers like you can now leverage model caching through the OpenVINO™ Execution Provider for ONNX Runtime, a product that accelerates inferencing of ONNX models using the ONNX Runtime APIs with the OpenVINO™ toolkit as a backend. With the OpenVINO Execution Provider, … Continue reading
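A minimal sketch, assuming the ONNX Runtime Python API with the onnxruntime-openvino package installed; the model path, cache directory, and device_type value are placeholders, and provider option names can differ between OpenVINO Execution Provider releases.

```python
import onnxruntime as ort

# "cache_dir" enables model caching: the first session creation compiles the
# model and stores a blob in the cache, so later sessions skip recompilation
# and the first inference starts sooner.
provider_options = [{
    "device_type": "GPU_FP32",   # caching helps most on devices with long compile times
    "cache_dir": "./ov_cache",   # directory where the compiled blobs are kept
}]

session = ort.InferenceSession(
    "model.onnx",                              # placeholder ONNX model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=provider_options,
)

# Confirm that the OpenVINO Execution Provider was actually selected.
print(session.get_providers())
```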
Creation of the THINK project website!
This site contains news about the THINK project as well as useful links on neural network techniques (courses, development systems, comparison results, etc. …