Recent articles
- Bringing Polish AI to Life: Running Bielik LLMs Natively on Intel® Gaudi® 3 Accelerators
- Optimizing SLMs on Intel® Xeon® Processors: A llama.cpp Performance Study
- Intel® Xeon® 6 Processors: The Smart Total Cost of Ownership Choice
- Next-Gen AI Inference: Intel® Xeon® Processors Power Vision, NLP, and Recommender Workloads
- Document Summarization: Transforming Enterprise Content with Intel® AI for Enterprise RAG
Neural networks news
Intel NN News
- Bringing Polish AI to Life: Running Bielik LLMs Natively on Intel® Gaudi® 3 Accelerators
From community curiosity to real-world inference – showing how local language models run with […]
- Optimizing SLMs on Intel® Xeon® Processors: A llama.cpp Performance Study
In this post, we'll discuss how to run responsive, CPU-only applications using a quantized SLM in […] (a minimal illustrative sketch follows this list)
- Intel® AI for Enterprise Inference as a Deployable Architecture on IBM Cloud
Authored by: Pai […]
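The llama.cpp study listed above describes serving a quantized SLM on CPU only. As a rough illustration of that workflow, here is a minimal sketch using the llama-cpp-python bindings; the GGUF file name, thread count, prompt, and generation settings are assumptions for illustration, not details taken from the article.

# Minimal sketch: CPU-only inference with a quantized SLM via llama-cpp-python.
# The model path, thread count, and prompt below are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="models/slm-q4_k_m.gguf",  # hypothetical quantized (GGUF) small language model
    n_ctx=2048,                           # context window size
    n_threads=8,                          # typically set to the number of physical cores on the host
)

output = llm(
    "Summarize the benefits of CPU-only inference in one sentence.",
    max_tokens=64,
    temperature=0.2,
)
print(output["choices"][0]["text"])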
Category archives: Uncategorized
Intel® Xeon® Processors Are Still the Only CPU With MLPerf Results, Raising the Bar By 5x
4th Gen Xeon processors deliver remarkable gen-on-gen gains across all MLPerf workloads
Get Better TensorFlow* Performance on CPUs and GPUs
The result of an ongoing collaboration between Intel® and Google, the Intel® Optimization for TensorFlow* delivers better performance on CPUs and GPUs.
Enabling In-Memory Computing for Artificial Intelligence Part 2: The Digital Approach
Intel Labs is actively pursuing multiple avenues for In-Memory Computing. Part 2 of this blog series discusses the digital approach and Intel Labs’ work in the area.
Join the Neurofibromatosis Tumor Segmentation Challenge
Intel is co-hosting a MICCAI 2023 challenge to improve deep learning results in medical image analysis
Philomag: AI and ChatGPT
https://www.philomag.com/articles/ia-comment-surmonter-langoisse-du-grand-remplacement
Azure ML Based Federated Learning with Intel® Xeon® Platforms
Federated Learning using Azure ML and on-premises Intel Xeon Platforms
oneTBB Concurrent Container Class: An Efficient Way To Scale Your C++ Application
This content has been moved to a different Blog category.
Intel Labs Releases Models for Computer Vision Depth Estimation: VI-Depth 1.0 and MiDaS 3.1
Intel Labs introduces VI-Depth version 1.0, an open-source model that integrates monocular depth estimation and visual inertial odometry to produce dense depth estimates with metric scale.
oneTBB Concurrent Container Class: An Efficient Way To Scale Your C++ Application
Intel® oneTBB can help you scale your C++ application, even if you are not an expert in threading concepts.