Monthly Archives: July 2023
Revolutionizing Recycling: Smart Garbage Classification using oneDNN
Using oneDNN for Improved Efficiency and Sustainability
Posted in Uncategorized
Comments closed on Revolutionizing Recycling: Smart Garbage Classification using oneDNN
Heart Disease Risk Prediction using scikit-learn* (sklearn) and XGBoost: Developer Spotlight
Developer Spotlight: In his blog, Arnab Das proposes a solution for heart disease risk prediction.
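The blog post's own code is not reproduced here, but the approach it describes can be sketched with a synthetic tabular dataset standing in for heart-disease records. This illustration uses scikit-learn's GradientBoostingClassifier; XGBoost's XGBClassifier would be a drop-in replacement at the model line. All names and parameters below are illustrative assumptions, not the author's code.

```python
# Illustrative sketch (not the blog's original code): fit a gradient-boosted
# classifier on synthetic data standing in for tabular patient features
# (age, cholesterol, etc.) and report held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 13 features mirrors the commonly used UCI heart-disease feature count.
X, y = make_classification(n_samples=500, n_features=13, n_informative=8,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                   max_depth=3, random_state=42)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.3f}")
```

Swapping in `xgboost.XGBClassifier` keeps the rest of the pipeline unchanged, since both expose the same scikit-learn fit/predict interface.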
Survival of the Fittest: Compact Generative AI Models Are the Future for Cost-Effective AI at Scale
The case for nimble, targeted, retrieval-based models as the best solution for generative AI applications deployed at scale.
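The retrieval step that underpins the retrieval-based models advocated above can be illustrated with a minimal pure-Python sketch: rank a small document store by cosine similarity of bag-of-words vectors and return the best match to ground a compact model's prompt. The corpus and query below are made up for the example.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation:
# bag-of-words vectors plus cosine similarity, no external dependencies.
import math
from collections import Counter

def bow(text):
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "Xeon processors accelerate sparse and dense compute for GNN training",
    "Compact generative models cut inference cost at deployment scale",
    "oneDNN provides optimized primitives for deep learning on CPUs",
]

def retrieve(query, docs):
    """Return the stored document most similar to the query."""
    q = bow(query)
    return max(docs, key=lambda d: cosine(q, bow(d)))

best = retrieve("cost of generative models at scale", corpus)
print(best)
```

Production systems replace the bag-of-words vectors with dense embeddings and an approximate nearest-neighbor index, but the ranking logic is the same.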
Intel Labs Presents Five Papers on Novel AI Research at ICML 2023
Intel Labs had five papers accepted at the 40th International Conference on Machine Learning (ICML) 2023, happening now through July 29. Two papers were selected as spotlight oral papers at the conference: ProtST, which uses a ChatGPT-style design interface for … Continue reading
Intel® Xeon® trains Graph Neural Network models in record time
The 4th gen Intel® Xeon® Scalable Processor, formerly codenamed Sapphire Rapids, is a balanced platform for Graph Neural Network (GNN) training, accelerating both sparse and dense compute. In this article, Intel CPU refers to 4th gen Intel® Xeon® Scalable Processor … Continue reading
Intel Xeon is all you need for AI inference: Performance Leadership on Real World Applications
Intel is democratizing AI inference by delivering better price and performance for real-world use cases on the 4th gen Intel® Xeon® Scalable Processors, formerly codenamed Sapphire Rapids. In this article, Intel® CPU refers to 4th gen Intel® Xeon® Scalable Processors. … Continue reading
JUMP and Intel Labs Success Story: UCSD Professor Leads HD Computing Research Efforts
For the past five years at the JUMP CRISP Center at UCSD, Professor Tajana Simunic Rosing has led hyperdimensional computing research efforts to solve memory and storage challenges in COVID-19 wastewater surveillance and personalized recommendation systems.
Democratizing Generative AI for Medicine
Add domain-specific knowledge to foundation AI models without the AI training costs or expertise.
Accelerate Workloads with OpenVINO and OneDNN
OpenVINO utilizes oneDNN GPU kernels on discrete GPUs to accelerate compute-intensive workloads.
Can we Improve Early-Exit Transformers? Novel Adaptive Inference Method Presented at ACL 2023
Intel Labs and the Hebrew University of Jerusalem present SWEET, an adaptive inference method for text classification, at this year’s ACL conference.
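SWEET's specifics are in the paper, but the general early-exit idea it builds on can be sketched in a few lines: attach a classifier head after each transformer layer and stop at the first layer whose prediction confidence clears a threshold, skipping the remaining layers. The toy logits below are made up for illustration and do not come from any real model.

```python
# Generic early-exit inference sketch (illustrative only; not SWEET itself):
# stop at the first per-layer classifier whose confidence clears a threshold.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_predict(layer_logits, threshold=0.9):
    """layer_logits: per-layer classifier logits, ordered shallow to deep.
    Returns (predicted_class, exit_layer_index)."""
    for depth, logits in enumerate(layer_logits):
        probs = softmax(logits)
        conf = max(probs)
        if conf >= threshold:          # confident enough: exit early
            return probs.index(conf), depth
    # No layer was confident: fall back to the deepest classifier.
    probs = softmax(layer_logits[-1])
    return probs.index(max(probs)), len(layer_logits) - 1

# Toy logits: layer 0 is unsure, layer 1 is confident about class 1.
logits_per_layer = [[0.2, 0.3], [0.1, 3.5], [0.0, 4.0]]
cls, depth = early_exit_predict(logits_per_layer)
print(cls, depth)  # exits at layer 1 without evaluating layer 2
```

The compute saving comes from never running layers deeper than the exit point; the trade-off, which adaptive methods like SWEET address, is that the shallow classifiers must be trained well enough for their confidence to be trustworthy.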