Near Memory Compute (NMC) is becoming increasingly important for future AI processing systems that demand greater system performance and energy efficiency. The Von Neumann computing model requires data to commute from memory to compute, and this data movement burns energy. Is it time for NMC to solve the data movement bottleneck? This blog addresses that question and is inspired by Intel Fellow Dr. Frank Hady's recent presentation at the International Solid-State Circuits Conference (ISSCC), titled "We have rethought our commute; Can we rethink our data's commute?"