Recent articles
- Next-Gen AI Inference: Intel® Xeon® Processors Power Vision, NLP, and Recommender Workloads
- Document Summarization: Transforming Enterprise Content with Intel® AI for Enterprise RAG
- AutoRound Meets SGLang: Enabling Quantized Model Inference with AutoRound
- In-production AI Optimization Guide for Xeon: Search and Recommendation Use Case
- Argonne’s Aurora Supercomputer Helps Power Breakthrough Simulations of Quantum Materials
Monthly Archives: October 2024
Building AI for Low-Resource Languages: Bezoku’s Innovative Approach
Bezoku, a member of the Intel® Liftoff program, is addressing the challenges of low-resource language modeling with innovative solutions designed to enhance communication across diverse dialects. Recently featured on the Intel On AI podcast, find out how they’re shaping the … Continue reading
Accelerate PyTorch* Inference with torch.compile on Windows* CPU
We are excited to announce that PyTorch* 2.5 has introduced support for the torch.compile feature on Windows* CPU, thanks to the collaborative efforts of Intel and Meta*. This enhancement aims to speed up PyTorch code execution over the default eager … Continue reading
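As a rough illustration of what this support enables (not code from the post itself), the sketch below compiles a small placeholder model with torch.compile on CPU; the toy network and input shapes are assumptions, and PyTorch 2.5 or newer is required for the Windows CPU path.

```python
# Minimal sketch, assuming PyTorch 2.5+ on a Windows CPU: the toy model and
# input shape below are illustrative placeholders, not from the announcement.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# torch.compile traces and optimizes the model; later calls reuse the
# compiled graph instead of executing op by op in eager mode.
compiled_model = torch.compile(model)

x = torch.randn(32, 128)
with torch.no_grad():
    out = compiled_model(x)  # first call triggers compilation; subsequent calls are faster
print(out.shape)
```

The first call pays the compilation cost; repeated calls with similarly shaped inputs reuse the optimized graph, which is where the speedup over default eager execution comes from.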
DubHacks’24 Hackathon Where Developers Innovatively Utilized Intel® Tiber™ AI Cloud and AI PCs
DubHacks’24 Hackathon: Highlights and Winning Projects of the ‘Best Use of Intel AI’ Track
Boost the Performance of AI/ML Applications using Intel® VTune™ Profiler
Enhance the performance of Python* and OpenVINO™ based AI/ML workloads using Intel® VTune™ Profiler
AI Developers, Join Team Intel at UT Austin’s HackTX 2024 Hackathon!
Bringing Intel® Tiber™ AI Cloud and AI PCs to developers at UT Austin’s HackTX 2024 hackathon
Model Quantization with OpenVINO
Discover how Desmond Grealy’s workshop on model quantization with OpenVINO explored ways to enhance AI performance and reduce model size for edge deployment.
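The excerpt does not include the workshop’s own code, so the following is only a hedged sketch of a typical OpenVINO post-training quantization flow using NNCF; the model path "model.xml", the input shape, and the random calibration data are placeholder assumptions.

```python
# Hedged sketch (not the workshop code): post-training INT8 quantization of an
# OpenVINO IR model with NNCF. "model.xml" and the random calibration items are
# placeholders; a real workflow uses a representative calibration set.
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder path to an FP32 IR model

# Small calibration set; the transform function maps each item to the model input.
calibration_items = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(100)]
calibration_dataset = nncf.Dataset(calibration_items, lambda item: item)

# Quantize weights and activations to INT8 to shrink the model and speed up
# inference for edge deployment.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")
```

In practice the calibration data would come from representative production inputs, since observed activation ranges drive the INT8 scaling factors.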
Optimizing Federated Learning Workloads: A Practical Evaluation
This webinar provides a deep dive into the technical aspects of using federated learning in healthcare, particularly for AI-based diagnostic tools in medical imaging.
Generative AI as a Life Saver: CerebraAI Helps Detect Strokes Quicker and More Precisely
CerebraAI’s generative AI software can significantly improve the detection and treatment of strokes. To optimize its software, CerebraAI uses the Intel® OpenVINO™ toolkit in several of its models.
Optimizing Multimodal AI Inference
In this recap of Rahul Nair’s workshop from Intel® Liftoff Days 2024, we’ll dive into optimization strategies for multimodal AI inference on Intel hardware. Learn how to leverage Intel’s software tools to boost performance and streamline AI workflows.
The 24.08 Intel® Tiber™ Edge Platform Release is Now Available
Discover the latest Intel Tiber Edge Platform features and revolutionize your edge AI solutions!