Recent Articles
- Curious Case of Chain of Thought: Improving CoT Efficiency via Training-Free Steerable Reasoning
- Intel Labs Works with Hugging Face to Deploy Tools for Enhanced LLM Efficiency
- AI’s Next Frontier: Human Collaboration, Data Strategy, and Scale
- Efficient PDF Summarization with CrewAI and Intel® XPU Optimization
- Rethinking AI Infrastructure: How NetApp and Intel Are Unlocking the Future with AIPod Mini
Neural networks news
Intel NN News
- Curious Case of Chain of Thought: Improving CoT Efficiency via Training-Free Steerable Reasoning
Researchers from the University of Texas at Austin and Intel Labs investigated chain-of-thought […]
- AI’s Next Frontier: Human Collaboration, Data Strategy, and Scale
Ramtin Davanlou, CTO of the Accenture and Intel Partnership, explores what it really takes for […]
- Intel Labs Works with Hugging Face to Deploy Tools for Enhanced LLM Efficiency
Large Language Models are revolutionizing AI applications; however, slow inference speeds continue […]
Monthly Archives: October 2024
Building AI for Low-Resource Languages: Bezoku’s Innovative Approach
Bezoku, a member of the Intel® Liftoff program, is addressing the challenges of low-resource language modeling with innovative solutions designed to enhance communication across diverse dialects. Recently featured on the Intel On AI podcast, the team shares how they’re shaping the … Continue reading
Posted in Uncategorized
Comments closed on Building AI for Low-Resource Languages: Bezoku’s Innovative Approach
Accelerate PyTorch* Inference with torch.compile on Windows* CPU
We are excited to announce that PyTorch* 2.5 has introduced support for the torch.compile feature on Windows* CPU, thanks to the collaborative efforts of Intel and Meta*. This enhancement aims to speed up PyTorch code execution over the default eager … Continue reading
DubHacks’24 Hackathon Where Developers Innovatively Utilized Intel® Tiber™ AI Cloud and AI PCs
DubHacks’24 Hackathon: Highlights and Winning Projects of the ‘Best Use of Intel AI’ Track
Boost the Performance of AI/ML Applications using Intel® VTune™ Profiler
Enhance the performance of Python* and OpenVINO™ based AI/ML workloads using Intel® VTune™ Profiler
AI Developers, Join Team Intel at UT Austin’s HackTX 2024 Hackathon!
Bringing Intel® Tiber™ AI Cloud and AI PCs to developers at UT Austin’s HackTX 2024 hackathon
Model Quantization with OpenVINO
Discover how Desmond Grealy’s workshop on model quantization with OpenVINO explored ways to enhance AI performance and reduce model size for edge deployment.
Optimizing Federated Learning Workloads: A Practical Evaluation
This webinar provides a deep dive into the technical aspects of using federated learning in healthcare, particularly for AI-based diagnostic tools in medical imaging.
Generative AI as a Life Saver: CerebraAI Helps Detect Strokes Faster and More Precisely
CerebraAI’s generative AI software can significantly enhance the detection and treatment of strokes. To optimize its software, CerebraAI uses the Intel® OpenVINO™ toolkit within several of its models.
Optimizing Multimodal AI Inference
In this recap of Rahul Nair’s workshop from Intel® Liftoff Days 2024, we’ll dive into optimization strategies for multimodal AI inference on Intel hardware. Learn how to leverage Intel’s software tools to boost performance and streamline AI workflows.
The 24.08 Intel® Tiber™ Edge Platform Release is Now Available
Discover the latest Intel Tiber Edge Platform features and revolutionize your edge AI solutions!