Intel is democratizing AI inference by delivering better price-performance for
real-world use cases on 4th gen Intel® Xeon® Scalable Processors, formerly codenamed Sapphire Rapids. In this article, Intel® CPU refers to 4th gen Intel® Xeon® Scalable Processors. For protein folding across a set of proteins with sequence lengths under one thousand, using an end-to-end inference pipeline based on DeepMind's AlphaFold2, a dual-socket Intel® CPU node delivers 30% better performance than our measured performance of an Intel® CPU with an A100 GPU offload.
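To give a sense of how such a pipeline runs inference natively on the CPU, here is a minimal sketch of the optimization pattern, assuming a PyTorch port of the model and Intel® Extension for PyTorch; the toy module and `features` tensor below are stand-ins for illustration, not the actual AlphaFold2 pipeline:

```python
import torch
import intel_extension_for_pytorch as ipex

# Toy module standing in for a PyTorch port of the AlphaFold2 model.
model = torch.nn.Sequential(
    torch.nn.Linear(384, 384),
    torch.nn.ReLU(),
    torch.nn.Linear(384, 384),
).eval()

# ipex.optimize casts weights to bfloat16 so the matrix multiplications
# can run on the 4th gen Xeon AMX (Advanced Matrix Extensions) engines.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Stand-in for preprocessed input features from the folding pipeline.
features = torch.randn(1, 384)

# Run inference under bfloat16 autocast on the CPU.
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(features)
```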