Intel is democratizing AI inference by delivering better price-performance for real-world use cases on 4th Gen Intel® Xeon® Scalable processors, formerly code-named Sapphire Rapids. In this article, "Intel® CPU" refers to 4th Gen Intel® Xeon® Scalable processors. For protein folding on a set of proteins shorter than 1,000 residues, using an end-to-end inference pipeline based on DeepMind's AlphaFold2, a dual-socket Intel® CPU node delivers 30% better performance than our measured performance of an Intel® CPU with an A100 offload.