Monthly Archives: July 2023
Revolutionizing Recycling: Smart Garbage Classification using oneDNN
Using oneDNN for Improved Efficiency and Sustainability
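As a rough illustration of the kind of oneDNN-accelerated image classification the post describes (not the post's actual code): on Intel CPUs, PyTorch convolution and linear layers execute through oneDNN, exposed as the "mkldnn" backend. The ResNet-18 backbone and the six-class garbage taxonomy below are assumptions.

```python
# Illustrative sketch only -- the ResNet-18 backbone and the six-class garbage
# taxonomy (glass, paper, cardboard, plastic, metal, trash) are assumptions,
# not the post's actual model. On Intel CPUs, PyTorch conv/linear ops are
# executed by oneDNN (exposed in PyTorch as the "mkldnn" backend).
import torch
import torchvision.models as models

print("oneDNN (mkldnn) available:", torch.backends.mkldnn.is_available())

model = models.resnet18(weights=None)  # pretrained weights omitted for brevity
model.fc = torch.nn.Linear(model.fc.in_features, 6)  # 6 hypothetical classes
model.eval()

with torch.inference_mode():
    logits = model(torch.randn(1, 3, 224, 224))  # dummy RGB image tensor
print("predicted class index:", logits.argmax(dim=1).item())
```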
Heart Disease Risk Prediction using scikit-learn* (sklearn) and XGBoost: Developer Spotlight
Developer Spotlight: In his blog post, Arnab Das proposes a solution for heart disease risk prediction using scikit-learn and XGBoost
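A minimal sketch of the scikit-learn + XGBoost workflow this spotlight covers, not the author's actual code: the "heart.csv" file and its binary "target" column are placeholders.

```python
# Illustrative sketch only -- "heart.csv" and its binary "target" column are
# placeholders, not the dataset used in the spotlight.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

df = pd.read_csv("heart.csv")                      # tabular clinical features
X, y = df.drop(columns="target"), df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="logloss"
)
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]           # predicted disease risk
print("ROC AUC:", roc_auc_score(y_test, risk))
```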
Survival of the Fittest: Compact Generative AI Models Are the Future for Cost-Effective AI at Scale
The case for nimble, targeted, retrieval-based models as the best solution for generative AI applications deployed at scale.
Intel Labs Presents Five Papers on Novel AI Research at ICML 2023
Intel Labs had five papers accepted at the 40th International Conference on Machine Learning (ICML) 2023, happening now through July 29. Two papers were selected as spotlight oral papers at the conference: ProtST, which uses a ChatGPT-style design interface for … Continue reading
Intel® Xeon® trains Graph Neural Network models in record time
The 4th gen Intel® Xeon® Scalable Processor, formerly codenamed Sapphire Rapids, is a balanced platform for Graph Neural Network (GNN) training, accelerating both sparse and dense compute. In this article, Intel CPU refers to the 4th gen Intel® Xeon® Scalable Processor … Continue reading
Intel Xeon is all you need for AI inference: Performance Leadership on Real World Applications
Intel is democratizing AI inference by delivering better price and performance for real-world use cases on the 4th gen Intel® Xeon® Scalable Processors, formerly codenamed Sapphire Rapids. In this article, Intel® CPU refers to 4th gen Intel® Xeon® Scalable Processors. … Continue reading
JUMP and Intel Labs Success Story: UCSD Professor Leads HD Computing Research Efforts
For the past five years at the JUMP CRISP Center at UCSD, Professor Tajana Simunic Rosing has led hyperdimensional computing research efforts to solve memory and storage challenges in COVID-19 wastewater surveillance and personalized recommendation systems.
Democratizing Generative AI for Medicine
Add domain-specific knowledge to foundation AI models without the AI training costs or expertise.
Accelerate Workloads with OpenVINO and oneDNN
OpenVINO uses oneDNN GPU kernels on discrete GPUs to accelerate compute-intensive workloads.
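A minimal sketch of running inference on an Intel discrete GPU through OpenVINO, where the "GPU" plugin dispatches compute-heavy layers to oneDNN kernels; the "model.xml" IR path and the 1x3x224x224 input shape are placeholders, not from the post.

```python
# Illustrative sketch only -- "model.xml" is a placeholder OpenVINO IR file and
# the 1x3x224x224 input shape is assumed; on Intel discrete GPUs the "GPU"
# plugin executes compute-heavy layers with oneDNN kernels.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)

model = core.read_model("model.xml")               # read the IR model
compiled = core.compile_model(model, "GPU")        # target an Intel discrete GPU

dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = compiled([dummy_input])                  # single synchronous inference
print("output shape:", next(iter(results.values())).shape)
```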
Can we Improve Early-Exit Transformers? Novel Adaptive Inference Method Presented at ACL 2023
Intel Labs and the Hebrew University of Jerusalem present SWEET, an adaptive inference method for text classification, at this year's ACL conference.