Recent Articles
- Multi-Modal Brand Agent: Connecting Visual Logos to Business Intelligence
- Building Efficient Multi-Modal AI Agents with Model Context Protocol (MCP)
- Intel Presents Novel Research at NAACL 2025
- Getting Started with Intel® Tiber™ AI Cloud
- AI Playground: Experience the Latest GenAI Software on AI PCs Powered by Intel® Arc™ Graphics
Intel Neural Networks News
- Multi-Modal Brand Agent: Connecting Visual Logos to Business Intelligence
Identify brands from logos and retrieve business data in seconds. This AI agent links vision models […]
- Building Efficient Multi-Modal AI Agents with Model Context Protocol (MCP)
A deep dive into creating modular, specialized AI systems that rival large language models.
- Intel Presents Novel Research at NAACL 2025
Intel is proud to present four papers at this year’s Annual Conference of the Nations of the […]
Monthly Archives: July 2023
Revolutionizing Recycling: Smart Garbage Classification using oneDNN
Using oneDNN for Improved Efficiency and Sustainability
Heart Disease Risk Prediction using scikit-learn* (sklearn) and XGBoost: Developer Spotlight
Developer Spotlight: Arnab Das in his blog proposed a solution to Heart Disease Risk Prediction
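The post above describes a heart-disease risk classifier built with scikit-learn and XGBoost. A minimal sketch of that kind of pipeline, using scikit-learn's `GradientBoostingClassifier` as a stand-in for XGBoost and synthetic data in place of the author's actual dataset (both are illustrative assumptions, not details from the post):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular heart-disease dataset
# (features such as age, cholesterol, resting blood pressure, etc.)
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Gradient-boosted trees; xgboost.XGBClassifier exposes the same
# fit/predict_proba interface and could be swapped in directly.
clf = GradientBoostingClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# ROC AUC is a common metric for binary risk prediction
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Test ROC AUC: {auc:.3f}")
```

Because XGBoost follows the scikit-learn estimator API, the rest of the pipeline (train/test split, metrics) stays unchanged when switching model implementations.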
Survival of the Fittest: Compact Generative AI Models Are the Future for Cost-Effective AI at Scale
The case for nimble, targeted, retrieval-based models as the best solution for generative AI applications deployed at scale.
Intel Labs Presents Five Papers on Novel AI Research at ICML 2023
Intel Labs had five papers accepted at the 40th International Conference on Machine Learning (ICML) 2023, happening now through July 29. Two papers were selected as spotlight oral papers at the conference: ProtST, which uses a ChatGPT-style design interface for … Continue reading
Intel® Xeon® trains Graph Neural Network models in record time
The 4th gen Intel® Xeon® Scalable Processor, formerly codenamed Sapphire Rapids, is a balanced platform for Graph Neural Network (GNN) training, accelerating both sparse and dense compute. In this article, Intel CPU refers to the 4th gen Intel® Xeon® Scalable Processor … Continue reading
Intel Xeon is all you need for AI inference: Performance Leadership on Real World Applications
Intel is democratizing AI inference by delivering better price and performance for real-world use cases on the 4th gen Intel® Xeon® Scalable Processors, formerly codenamed Sapphire Rapids. In this article, Intel® CPU refers to 4th gen Intel® Xeon® Scalable Processors. … Continue reading
JUMP and Intel Labs Success Story: UCSD Professor Leads HD Computing Research Efforts
For the past five years at the JUMP CRISP Center at UCSD, Professor Tajana Simunic Rosing has led hyperdimensional computing research efforts to solve memory and storage challenges in COVID-19 wastewater surveillance and personalized recommendation systems.
Democratizing Generative AI for Medicine
Add domain-specific knowledge to foundation AI models without the AI training costs or expertise.
Accelerate Workloads with OpenVINO and oneDNN
OpenVINO utilizes oneDNN GPU kernels for discrete GPUs to accelerate compute-intensive workloads
Can we Improve Early-Exit Transformers? Novel Adaptive Inference Method Presented at ACL 2023
Intel Labs and the Hebrew University of Jerusalem present SWEET, an adaptive inference method for text classification, at this year’s ACL conference.