Near Memory Compute (NMC) is becoming important for future AI processing systems that need improvements in system performance and energy efficiency. The Von Neumann computing model requires data to commute from memory to compute, and this data movement burns energy. Is it time for NMC to solve this data-movement bottleneck? This blog addresses that question and is inspired by Intel Fellow Dr. Frank Hady's recent presentation at the International Solid-State Circuits Conference (ISSCC), titled "We have rethought our commute; Can we rethink our data's commute?"