Intel is democratizing AI inference by delivering better price-performance for
real-world use cases on 4th Gen Intel® Xeon® Scalable processors, formerly codenamed Sapphire Rapids. In this article, "Intel® CPU" refers to 4th Gen Intel® Xeon® Scalable processors. For protein folding of a set of proteins each shorter than 1,000 residues, using an end-to-end inference pipeline based on DeepMind's AlphaFold2, a dual-socket Intel® CPU node delivers 30% better performance than our measured performance of an Intel® CPU with A100 offload.