Recent articles
- Accelerating Llama 3.3-70B Inference on Intel® Gaudi® 2 via Hugging Face Text Generation Inference
- Exploring Vision-Language Models (VLMs) with Text Generation Inference on Intel® Data Center GPU Max
- A Journey Towards Approaching “Why” Question-Answering for Video
- From Infrastructure to Impact: How Dell is Scaling AI
- Intel Labs’ Kid Space Conversational AI Facilitates Collaborative Problem-Solving Among Students
Neural networks news
Intel NN News
- Accelerating Llama 3.3-70B Inference on Intel® Gaudi® 2 via Hugging Face Text Generation Inference
Learn how to deploy Llama 3.3-70B on Intel® Gaudi® 2 AI accelerators using Hugging Face TGI, with […]
- Exploring Vision-Language Models (VLMs) with Text Generation Inference on Intel® Data Center GPU Max
Supercharge VLM deployment with TGI on Intel XPUs. This guide shows how to set up, optimize, and […]
- Evaluating Trustworthiness of Explanations in Agentic AI Systems
Intel Labs research published at the ACM CHI 2025 Human-Centered Explainable AI Workshop found that […]
Monthly archives: March 2025
High-Quality Data for Smarter AI with Argilla & Hugging Face
Discover how Argilla empowers AI engineers and domain experts to collaborate seamlessly, transforming raw data into high-quality insights for smarter AI solutions.
The Urgent Need for Intrinsic Alignment Technologies for Responsible Agentic AI
Rethinking AI Alignment and Safety in the Age of Deep Scheming
Vector Quantization for Scalable Vector Search
The first post in this series introduced vector search, its relevance in today’s world, and the important metrics used to characterize it. We can achieve dramatic gains in vector search systems by improving their internal vector representations, as the majority … Continue reading
dstack: Now offering support for Intel® Gaudi® AI Accelerators & Intel® Tiber® AI Cloud
Discover how dstack’s integration with Intel® Gaudi® AI accelerators enhances AI workflow efficiency, reduces costs, and accelerates scalable model deployment across cloud and on-prem environments.
Introducing Intel® Tiber™ Secure Federated AI
A new product from Intel Tiber Trust Services brings zero-trust security to AI model training.