Recent Articles
- Accelerating Llama 3.3-70B Inference on Intel® Gaudi® 2 via Hugging Face Text Generation Inference
- Exploring Vision-Language Models (VLMs) with Text Generation Inference on Intel® Data Center GPU Max
- A Journey Towards Approaching “Why” Question-Answering for Video
- From Infrastructure to Impact: How Dell is Scaling AI
- Intel Labs’ Kid Space Conversational AI Facilitates Collaborative Problem-Solving Among Students
Neural networks news
Intel NN News
- Accelerating Llama 3.3-70B Inference on Intel® Gaudi® 2 via Hugging Face Text Generation Inference
Learn how to deploy Llama 3.3-70B on Intel® Gaudi® 2 AI accelerators using Hugging Face TGI, with […] (see the client sketch after this list)
- Exploring Vision-Language Models (VLMs) with Text Generation Inference on Intel® Data Center GPU Max
Supercharge VLM deployment with TGI on Intel XPUs. This guide shows how to set up, optimize, and […]
- Evaluating Trustworthiness of Explanations in Agentic AI Systems
Intel Labs research published at the ACM CHI 2025 Human-Centered Explainable AI Workshop found that […]
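Both TGI items above cover server-side deployment; once a TGI endpoint is up (on Gaudi 2 or an Intel Data Center GPU Max system), it is queried over plain HTTP. Below is a minimal client sketch, assuming a server already listening at http://localhost:8080; the endpoint address, prompt, and generation settings are illustrative rather than taken from the posts.

```python
import requests

# Hypothetical endpoint: a TGI server assumed to be already running,
# e.g. serving meta-llama/Llama-3.3-70B-Instruct on Gaudi 2.
TGI_URL = "http://localhost:8080/generate"

payload = {
    "inputs": "Explain federated learning in one sentence.",
    "parameters": {
        "max_new_tokens": 128,  # cap on generated length; decoding is greedy by default
    },
}

# TGI's /generate route returns JSON containing a "generated_text" field.
response = requests.post(TGI_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["generated_text"])
```

For the vision-language case, TGI also exposes an OpenAI-compatible chat route that accepts image inputs, so the same request-and-parse pattern applies.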
Monthly Archives: December 2024
Unlocking the Future of AI with Federated Learning
OpenFL 1.6 Is Released and Federated Learning for Healthcare Is Showcasing Exciting Possibilities
Building Trust in AI: An End-to-End Approach for the Machine Learning Model Lifecycle
At Intel Labs, we believe that responsible AI begins with ensuring the integrity and transparency of ML systems, from model training to inferencing, making the ability to verify model lineage an essential foundation of ethical ML development.
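One common building block for verifying model lineage, shown here only as a conceptual sketch rather than the approach described in the post, is content-addressing each artifact so that any change to the weights or training data is detectable. The file names below are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return a SHA-256 digest of a model artifact (weights, data, config)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_lineage(model_path: str, training_data_path: str, out: str) -> None:
    """Write a small lineage record binding a model to its training inputs."""
    record = {
        "model_sha256": fingerprint(model_path),
        "training_data_sha256": fingerprint(training_data_path),
    }
    Path(out).write_text(json.dumps(record, indent=2))

# Hypothetical artifacts; in practice the record would also be signed.
# record_lineage("model.safetensors", "train.parquet", "lineage.json")
```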
Intel Presents Novel AI Research at NeurIPS 2024
The Conference on Neural Information Processing Systems (NeurIPS 2024) will run from Tuesday, December 10th, through Sunday, December 15th, at the Vancouver Convention Center in Vancouver, B.C., Canada. This year, Intel presents 36 papers at NeurIPS, including eight at the main … Continue reading
How Ultralytics is Advancing AI with YOLO Models and Intel’s Support
Discover how Ultralytics tackles real-world challenges with customized YOLO models, solving problems from hazardous zone monitoring to parking management.
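As a rough illustration of that kind of customization, the Ultralytics Python API lets a pretrained YOLO checkpoint be fine-tuned on a domain-specific dataset. The sketch below assumes the ultralytics package is installed; the dataset YAML and image file are hypothetical placeholders, not assets from the post.

```python
from ultralytics import YOLO

# Start from a small pretrained checkpoint (downloaded on first use).
model = YOLO("yolov8n.pt")

# Fine-tune on a custom dataset described by a hypothetical YAML file,
# e.g. hazardous-zone imagery with task-specific classes.
model.train(data="hazard_zone.yaml", epochs=50, imgsz=640)

# Run inference on a new frame; results carry boxes, classes, and scores.
results = model("site_camera_frame.jpg")
results[0].show()
```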
From Centralized Machine Learning to Federated Learning with OpenFL
The Federated Learning (FL) machine learning paradigm addresses model bias through diverse data, while maintaining privacy and security for data owners.
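To make the paradigm concrete, the aggregation step used by many FL systems is a data-weighted average of locally trained model weights (FedAvg-style). The sketch below is a conceptual NumPy illustration, not OpenFL's API.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Weighted average of per-collaborator weights (FedAvg-style).

    local_weights: one list of layer tensors per collaborator.
    sample_counts: number of training samples each collaborator used.
    """
    total = sum(sample_counts)
    coeffs = [n / total for n in sample_counts]
    # Weight each site's update by its share of the data, then sum per layer.
    return [
        sum(c * site[layer] for c, site in zip(coeffs, local_weights))
        for layer in range(len(local_weights[0]))
    ]

# Toy example: two collaborators, one layer each. Raw data never leaves
# either site; only the trained weights reach the aggregator.
w_site_a = [np.array([1.0, 2.0])]
w_site_b = [np.array([3.0, 4.0])]
print(federated_average([w_site_a, w_site_b], sample_counts=[100, 300]))
# -> [array([2.5, 3.5])]
```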
Intel Researchers Organize Workshops & Socials at NeurIPS 2024
Intel employees have organized several workshops co-located with this year's NeurIPS 2024 conference, including AI for Accelerated Materials Design (AI4Mat-NeurIPS-2024), Breaking Silos Open Community for AI x Science, Responsibly Building the Next Generation of Multimodal Foundational Models, … Continue reading
Introducing OpenFL 1.6: Federated LLM Fine-Tuning and Evaluation
The most recent OpenFL release enables the next wave of federated learning development.
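Federated evaluation, one of the capabilities named in the title, follows the same pattern in spirit: each collaborator scores the shared model on its own held-out data and only the metrics travel to the aggregator. A conceptual sketch, not the OpenFL 1.6 API; the scores and sample counts are made up.

```python
def federated_metric(local_scores, sample_counts):
    """Aggregate per-site evaluation scores into one global metric.

    local_scores:  metric each collaborator computed on its own data
                   (e.g. accuracy of the fine-tuned LLM).
    sample_counts: size of each collaborator's evaluation set.
    """
    total = sum(sample_counts)
    return sum(s * n / total for s, n in zip(local_scores, sample_counts))

# Three hypothetical sites report accuracy on local evaluation sets.
print(federated_metric([0.82, 0.78, 0.90], sample_counts=[500, 1500, 1000]))
# -> weighted accuracy of about 0.827
```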