Recent articles
- Next-Gen AI Inference: Intel® Xeon® Processors Power Vision, NLP, and Recommender Workloads
- Document Summarization: Transforming Enterprise Content with Intel® AI for Enterprise RAG
- AutoRound Meets SGLang: Enabling Quantized Model Inference with AutoRound
- In-production AI Optimization Guide for Xeon: Search and Recommendation Use Case
- Argonne’s Aurora Supercomputer Helps Power Breakthrough Simulations of Quantum Materials
Intel NN News
- Next-Gen AI Inference: Intel® Xeon® Processors Power Vision, NLP, and Recommender Workloads
Intel® Xeon® processors can deliver a CPU-first platform built for modern AI workloads without […]
- Document Summarization: Transforming Enterprise Content with Intel® AI for Enterprise RAG
Transform enterprise documents into insights with Document Summarization, optimized for Intel® […]
- AutoRound Meets SGLang: Enabling Quantized Model Inference with AutoRound
We are thrilled to announce an official collaboration between SGLang and AutoRound, enabling […]
Category Archives: Uncategorized
Deploying Llama 4 Scout and Maverick Models on Intel® Gaudi® 3 with vLLM
Learn how to deploy Llama 4 Scout and Maverick models on Intel® Gaudi® 3 using vLLM for efficient, high-performance inference across complex AI tasks.
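As a rough orientation (a sketch under stated assumptions, not the article's exact recipe), offline inference with vLLM's Python API might look like the following; the model ID, tensor-parallel degree, and context length are assumptions, and an HPU-enabled vLLM build for Gaudi 3 is presumed.

```python
# Minimal sketch, not the post's exact setup: offline inference with vLLM's Python API.
# Assumptions: HF model ID, tensor_parallel_size=8, and an HPU-enabled vLLM build for Gaudi 3.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed model ID
    tensor_parallel_size=8,                             # assumed sharding across 8 cards
    max_model_len=8192,                                 # assumed context length
)

outputs = llm.generate(
    ["Summarize the key considerations when serving large MoE models."],
    SamplingParams(temperature=0.7, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```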
Running Llama3.3-70B on Intel® Gaudi® 2 with vLLM: A Step-by-Step Inference Guide
Run Llama 3.3-70B efficiently on Intel® Gaudi® 2 using vLLM. Learn setup, configuration, and performance tips for scalable, production-ready inference.
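For a sense of how such a deployment is typically queried (a sketch under assumptions, not the guide's exact configuration), a server started with `vllm serve` exposes an OpenAI-compatible endpoint that can be called like this; the base URL and model ID below are illustrative.

```python
# Minimal sketch: querying a vLLM OpenAI-compatible endpoint assumed to be running locally.
# The base URL and model ID are illustrative, not the guide's exact values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "Give one tip for scaling LLM inference on accelerators."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```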
Accelerating Llama 3.3-70B Inference on Intel® Gaudi® 2 via Hugging Face Text Generation Inference
Learn how to deploy Llama 3.3-70B on Intel® Gaudi® 2 AI accelerators using Hugging Face TGI, with practical setup steps and optimization tips.
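As a hedged illustration of the client side (the endpoint address and generation parameters are assumptions, not the post's deployment), a running TGI server can be queried with the huggingface_hub client:

```python
# Minimal sketch: querying a TGI endpoint assumed to be serving Llama 3.3-70B on Gaudi 2.
# The endpoint URL and generation parameters are illustrative.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # assumed local TGI address
print(client.text_generation(
    "Explain tensor parallelism in one sentence.",
    max_new_tokens=128,
    temperature=0.7,
))
```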
Exploring Vision-Language Models (VLMs) with Text Generation Inference on Intel® Data Center GPU Max
Supercharge VLM deployment with TGI on Intel XPUs. This guide shows how to set up, optimize, and serve blazing-fast models using Intel® Tiber AI Cloud.
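To illustrate what serving a VLM through TGI looks like from the client side (the endpoint and image URLs below are placeholders, not the article's actual deployment), an image-plus-text prompt can be sent through the chat API:

```python
# Minimal sketch: sending an image + text prompt to a TGI endpoint assumed to be
# serving a vision-language model on Intel GPUs. URLs below are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # assumed TGI address
resp = client.chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/sample.png"}},
            {"type": "text", "text": "Describe what this image shows."},
        ],
    }],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```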
A Journey Towards Approaching “Why” Question-Answering for Video
Let’s take a quick journey through the strides made from 2012 to 2025, from simple image classification to recent video-LLMs, to understand how to approach “why” questions in video understanding.
From Infrastructure to Impact: How Dell is Scaling AI
Unlocking AI’s Potential: Insights from Dell’s Varun Chhabra on Storytelling, Innovation, and Transformation.
Intel Labs’ Kid Space Conversational AI Facilitates Collaborative Problem-Solving Among Students
Scientists involved in the multi-year research project completed several prototype studies, demonstrating the potential of the program to facilitate student engagement in various learning environments, including classrooms, after-school programs, and home-based learning. The final Kid Space research study shows the value …
HPE Sets World Record with HPE ProLiant DL380 Gen11 Server powered by 5th Gen Intel® Xeon® Processor
Hewlett Packard Enterprise (HPE) set a world record for the TPC Benchmark™ Express AI (TPCx-AI) SF100 benchmark.
New Atlas CLI Open Source Tool Manages Machine Learning Model Provenance and Transparency
Intel Labs offers Atlas CLI, an open source tool for managing machine learning (ML) model provenance, including model artifact integrity and model lineage in ML pipelines.
Intel Labs Presents Leading Multimodal and Agentic Research at CVPR 2025
Intel Labs researchers will present eleven papers at conference workshops as part of CVPR 2025. These works include a framework for systematic hierarchical analysis of vision model representations; a flexible graph-learning framework for fine-grained keystep recognition; and a novel interpretability …