Intel is democratizing AI inference by delivering better price-performance for
real-world use cases on 4th Gen Intel® Xeon® Scalable Processors, formerly code-named Sapphire Rapids. In this article, Intel® CPU refers to 4th Gen Intel® Xeon® Scalable Processors. For protein folding on a set of proteins, each shorter than 1,000 residues, using an end-to-end inference pipeline based on DeepMind's AlphaFold2, a dual-socket Intel® CPU node delivers 30% better performance than our measured performance of an Intel® CPU with an NVIDIA A100 offload.
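To make the CPU-only setup concrete, the sketch below shows one way to pin a JAX-based pipeline such as AlphaFold2 to the host CPU. This is an illustrative snippet under assumed settings, not Intel's benchmark harness; the bfloat16 matmul is a hypothetical stand-in for one compute-heavy step of the pipeline, not a step taken from it.

```python
# Minimal sketch: forcing a JAX-based pipeline (e.g., AlphaFold2) onto the CPU.
# These environment variables must be set before JAX is first imported.
import os

os.environ["JAX_PLATFORMS"] = "cpu"      # restrict JAX to the CPU backend
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # also hide any GPUs from CUDA

import jax
import jax.numpy as jnp

# Confirm that all computation will land on the host CPU.
print(jax.devices())  # e.g., [CpuDevice(id=0)]

# Hypothetical stand-in for one matmul-heavy inference step.
x = jnp.ones((1024, 1024), dtype=jnp.bfloat16)
y = jax.jit(lambda a: a @ a)(x)
print(y.shape, y.dtype)
```

Note that the platform selection must happen before the first `import jax`, since JAX fixes its backend at initialization time.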