Intel Xeon is all you need for AI inference: Performance Leadership on Real World Applications

Intel is democratizing AI inference by delivering better price-performance for real-world use cases on 4th Gen Intel® Xeon® Scalable Processors, formerly codenamed Sapphire Rapids. In this article, Intel® CPU refers to 4th Gen Intel® Xeon® Scalable Processors. For protein folding on a set of proteins shorter than one thousand residues, using DeepMind’s AlphaFold2 inference-based end-to-end pipeline, a dual-socket Intel® CPU node delivers 30% better performance than our measured performance of an Intel® CPU with A100 offload.
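The exact AlphaFold2 pipeline and benchmark configuration are not shown in this article. As an illustration only, the following minimal sketch shows how a PyTorch model might be prepared for bfloat16 inference on 4th Gen Xeon (which supports Intel® AMX) using Intel® Extension for PyTorch; the model, input shapes, and sizes here are placeholders, not the configuration Intel measured.

```python
# Illustrative sketch only: bfloat16 CPU inference with Intel Extension for PyTorch.
# The model and inputs below are placeholders, not the AlphaFold2 pipeline benchmarked above.
import torch
import intel_extension_for_pytorch as ipex

# Placeholder model standing in for an inference workload.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 64),
).eval()

# Optimize the model for Xeon: operator fusion and bfloat16 weight preparation.
model = ipex.optimize(model, dtype=torch.bfloat16)

example_input = torch.randn(32, 256)

# Run inference under bfloat16 autocast; on 4th Gen Xeon the bfloat16 matrix
# multiplications can be accelerated by Intel AMX.
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(example_input)

print(output.shape)
```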
