Intel is democratizing AI inference by delivering better price-performance for
real-world use cases on 4th Gen Intel® Xeon® Scalable processors, formerly codenamed Sapphire Rapids. In this article, Intel® CPU refers to a 4th Gen Intel® Xeon® Scalable processor. For protein folding of a set of proteins shorter than one thousand residues, using an end-to-end inference pipeline based on DeepMind's AlphaFold2, a dual-socket Intel® CPU node delivers 30% better performance than our measured performance of an Intel® CPU with an A100 GPU offload.