The first post in this series introduced vector search, its relevance in today’s world, and the important metrics used to characterize it. We can achieve dramatic gains in vector search systems by improving their internal vector representations, because the majority of the search runtime is spent fetching vectors from memory to compute their similarity with the query. The focus of this post, Locally-adaptive Vector Quantization (LVQ), accelerates the search, lowers the memory footprint, and preserves the efficiency of the similarity computation.
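
To make the idea concrete, below is a minimal NumPy sketch of the principle behind LVQ: each vector is scalar-quantized with its own offset and step size, adapted to that vector’s individual value range. The function names, the mean-centering step, and the 8-bit setting are illustrative assumptions for this sketch, not the API of a production implementation:

```python
import numpy as np

def lvq_encode(vectors, bits=8):
    """Sketch of LVQ-style per-vector scalar quantization.

    Each vector gets its own lower bound and step size (the
    "locally adaptive" part), so the quantization grid matches
    that vector's value range instead of a global one.
    """
    # Quantize residuals around the dataset mean (assumed preprocessing).
    mean = vectors.mean(axis=0)
    residuals = vectors - mean

    levels = 2 ** bits - 1
    # Per-vector bounds and step size.
    lo = residuals.min(axis=1, keepdims=True)
    hi = residuals.max(axis=1, keepdims=True)
    delta = (hi - lo) / levels
    delta = np.where(delta == 0, 1.0, delta)  # guard constant vectors

    # Codes fit in one byte for bits <= 8.
    codes = np.round((residuals - lo) / delta).astype(np.uint8)
    return codes, lo, delta, mean

def lvq_decode(codes, lo, delta, mean):
    """Reconstruct approximate vectors from their compact codes."""
    return codes * delta + lo + mean

# Usage: encode 1000 random 128-dimensional vectors at 8 bits.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128)).astype(np.float32)
codes, lo, delta, mean = lvq_encode(X, bits=8)
X_hat = lvq_decode(codes, lo, delta, mean)
print("max reconstruction error:", np.abs(X - X_hat).max())
```

Note the low overhead: each vector carries only two extra scalars (its offset and step size), so the representation stays compact while far fewer bytes move from memory per similarity computation.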