In this article, we focus on fine-tuning the DeepSeek-R1-Distill-Qwen-1.5B reasoning model to improve its performance on task-specific data using an Intel® Data Center GPU Max 1100.