Intel is democratizing AI inference by delivering better price-performance for real-world use cases on the 4th gen Intel® Xeon® Scalable Processors, formerly codenamed Sapphire Rapids. In this article, Intel® CPU refers to 4th gen Intel® Xeon® Scalable Processors. For end-to-end protein folding of a set of proteins, each under 1,000 residues in length, using an inference pipeline based on DeepMind's AlphaFold2, a dual-socket Intel® CPU node delivers 30% better performance than our measured performance of an Intel® CPU with an A100 GPU offload.
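As a rough illustration of the kind of CPU-side inference this comparison measures, the sketch below runs a PyTorch model in bfloat16 on the CPU, the datatype accelerated by the Intel® AMX instructions on 4th gen Xeon. This is a minimal sketch under stated assumptions: `FoldingBlock` is a hypothetical stand-in for an AlphaFold2 component, not the actual model or Intel's optimized pipeline.

```python
# Minimal sketch: bfloat16 CPU inference with PyTorch autocast.
# FoldingBlock is a hypothetical stand-in for an AlphaFold2-style layer;
# the real pipeline is far larger, but the execution pattern is similar.
import torch
import torch.nn as nn

class FoldingBlock(nn.Module):
    """Toy attention + MLP block standing in for an Evoformer-style layer."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)
        return x + self.mlp(attn_out)

model = FoldingBlock().eval()
seq = torch.randn(1, 999, 256)  # one sequence, length < 1,000 residues

# bf16 matmuls on CPU dispatch to AMX-accelerated kernels on 4th gen Xeon
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(seq)
print(out.shape)  # torch.Size([1, 999, 256])
```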