Monthly Archives: July 2023
Revolutionizing Recycling: Smart Garbage Classification using oneDNN
Using oneDNN for Improved Efficiency and Sustainability
Heart Disease Risk Prediction using scikit-learn* (sklearn) and XGBoost: Developer Spotlight
Developer Spotlight: In his blog, Arnab Das proposes a solution for heart disease risk prediction.
Survival of the Fittest: Compact Generative AI Models Are the Future for Cost-Effective AI at Scale
The case for nimble, targeted, retrieval-based models as the best solution for generative AI applications deployed at scale.
Intel Labs Presents Five Papers on Novel AI Research at ICML 2023
Intel Labs had five papers accepted at the 40th International Conference on Machine Learning (ICML) 2023, happening now through July 29. Two papers were selected as spotlight oral papers at the conference: ProtST, which uses a ChatGPT-style design interface for … Continue reading
Intel® Xeon® trains Graph Neural Network models in record time
The 4th gen Intel® Xeon® Scalable Processor, formerly codenamed Sapphire Rapids, is a balanced platform for Graph Neural Network (GNN) training, accelerating both sparse and dense compute. In this article, Intel CPU refers to the 4th gen Intel® Xeon® Scalable Processor … Continue reading
Intel Xeon is all you need for AI inference: Performance Leadership on Real World Applications
Intel is democratizing AI inference by delivering better price and performance for real-world use cases on the 4th gen Intel® Xeon® Scalable Processors, formerly codenamed Sapphire Rapids. In this article, Intel® CPU refers to 4th gen Intel® Xeon® Scalable Processors. … Continue reading
JUMP and Intel Labs Success Story: UCSD Professor Leads HD Computing Research Efforts
For the past five years at the JUMP CRISP Center at UCSD, Professor Tajana Simunic Rosing has led hyperdimensional computing research efforts to solve memory and storage challenges in COVID-19 wastewater surveillance and personalized recommendation systems.
Democratizing Generative AI for Medicine
Add domain-specific knowledge to foundation AI models without the AI training costs or expertise.
Accelerate Workloads with OpenVINO and OneDNN
OpenVINO utilizes oneDNN GPU kernels for discrete GPUs to accelerate compute-intensive workloads.
Can we Improve Early-Exit Transformers? Novel Adaptive Inference Method Presented at ACL 2023
Intel Labs and the Hebrew University of Jerusalem present SWEET, an adaptive inference method for text classification, at this year's ACL conference.