Intel is democratizing AI inference by delivering better price-performance for
real-world use cases on 4th Gen Intel® Xeon® Scalable processors, formerly codenamed Sapphire Rapids. In this article, "Intel® CPU" refers to 4th Gen Intel® Xeon® Scalable processors. For protein folding on a set of proteins shorter than one thousand residues, using an end-to-end pipeline based on DeepMind's AlphaFold2 inference, a dual-socket Intel® CPU node delivers 30% better performance than our measured performance of an Intel® CPU with an A100 offload.