This evaluation shows materially higher concurrency and improved latency scaling when moving from a 64-core to a 96-core Intel® Xeon® configuration for Intel® AI for Enterprise RAG inference. The 96-core SKU doubles SLA-compliant concurrency for Llama-AWQ and Mistral-AWQ (32 → 64 users) across all workloads and increases Qwen-AWQ SLA concurrency by 33–50% (workload dependent) versus the 64-core system.
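To make the "SLA-compliant concurrency" figures above concrete, the sketch below shows one common way such a number is derived: sweep concurrent user counts and report the largest count whose tail latency still meets the service-level bound. All names, latency values, and the SLA threshold here are hypothetical assumptions for illustration only, not measurements or methodology from this evaluation.

```python
# Minimal sketch: derive the maximum SLA-compliant concurrency from a latency sweep.
# The latency figures and SLA bound below are placeholders, not results from this report.

P90_LATENCY_BY_CONCURRENCY = {  # hypothetical p90 latency (seconds) per concurrent-user count
    8: 1.2,
    16: 1.6,
    32: 2.3,
    64: 4.8,
    128: 9.5,
}

SLA_P90_SECONDS = 5.0  # assumed SLA bound, for illustration only


def max_sla_concurrency(latency_by_users: dict[int, float], sla: float) -> int:
    """Return the highest concurrent-user count whose p90 latency stays within the SLA."""
    compliant = [users for users, p90 in latency_by_users.items() if p90 <= sla]
    return max(compliant) if compliant else 0


if __name__ == "__main__":
    print(max_sla_concurrency(P90_LATENCY_BY_CONCURRENCY, SLA_P90_SECONDS))  # -> 64
```

Under this framing, "doubling SLA-compliant concurrency" simply means the larger SKU sustains twice as many simultaneous users before tail latency crosses the agreed bound.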