In this recap of Rahul Nair’s workshop from Intel® Liftoff Days 2024, we’ll dive into optimization strategies for multimodal AI inference on Intel hardware. Learn how to leverage Intel’s software tools to boost performance and streamline AI workflows.