Near Memory Compute (NMC) is becoming important for future AI processing systems, which demand significant improvements in performance and energy efficiency. The Von Neumann computing model requires data to commute from memory to compute, and this data movement burns energy. Is it time for NMC to solve this data movement bottleneck? This blog addresses that question and is inspired by Intel Fellow Dr. Frank Hady's recent presentation at the International Solid-State Circuits Conference (ISSCC), titled "We have rethought our commute; Can we rethink our data's commute?"