Recent Articles
- Unlocking AI Development with Windows* ML: Intel and Microsoft’s Strategic Partnership
- Multi-Modal Brand Agent: Connecting Visual Logos to Business Intelligence
- Building Efficient Multi-Modal AI Agents with Model Context Protocol (MCP)
- Intel Presents Novel Research at NAACL 2025
- Getting Started with Intel® Tiber™ AI Cloud
Intel NN News
- Unlocking AI Development with Windows* ML: Intel and Microsoft's Strategic Partnership
We are thrilled to introduce a technical preview of Windows ML, enhanced by the built-in […]
- Multi-Modal Brand Agent: Connecting Visual Logos to Business Intelligence
Identify brands from logos and retrieve business data in seconds. This AI agent links vision models […]
- Building Efficient Multi-Modal AI Agents with Model Context Protocol (MCP)
A deep dive into creating modular, specialized AI systems that rival large language models.
Category Archives: Uncategorized
Running Falcon Inference on a CPU with Hugging Face Pipelines
Learn how to run inference with the 7-billion- and 40-billion-parameter Falcon models on a 4th Gen Intel® Xeon® CPU using Hugging Face pipelines.
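The recipe named in the title can be sketched with the standard Hugging Face text-generation pipeline; the model revision, prompt, and generation parameters below are illustrative assumptions, not the article's own code.

```python
# Illustrative sketch: Falcon-7B-Instruct through a Hugging Face text-generation
# pipeline on CPU (the 40B variant works the same way but needs far more memory).
import torch
from transformers import AutoTokenizer, pipeline

model_id = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # bfloat16 is natively accelerated on 4th Gen Xeon
    device=-1,                   # -1 = run on CPU
)

result = generator(
    "Explain in one sentence what a large language model is.",
    max_new_tokens=64,
    do_sample=True,
    top_k=10,
)
print(result[0]["generated_text"])
```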
It Takes More Than Hardware to Make a Great AI PC
In 2023 and 2024, many companies will be competing for your time and attention on artificial intelligence. So let’s talk about sure bets instead: Intel employs thousands of software developers, and their day job is to work closely with AI … Continue reading
Advent of GenAI Hackathon: Recap of Challenge 4
Prediction Guard, a member of the Intel® Liftoff for Startups program, is hosting a generative AI hackathon with the support of the Liftoff team.
Fireside Chat Recap: Insights and Innovations from the Advent of GenAI Hackathon
This week, we hosted a Fireside Chat to look back at our recent Advent of GenAI Hackathon and reflect on its biggest successes and highlights. The hackathon was the brainchild of Intel® Liftoff’s Rahul Unnikrishnan Nair and Prediction Guard’s Daniel Whitenack.
Text-to-SQL Generation Using Fine-tuned LLMs on Intel GPUs (XPUs) and QLoRA
In this blog, Rahul Unnikrishnan Nair, an Architect and Engineering Lead and a dedicated mentor in the Intel® Liftoff for AI Startups program, explores text-to-SQL generation using fine-tuned large language models (LLMs) on Intel GPUs (XPUs) and QLoRA.
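As a rough illustration of the QLoRA recipe the post describes (a 4-bit quantized base model plus small trainable LoRA adapters), here is a generic sketch with Hugging Face transformers and peft; the base model name, hyperparameters, and device handling are assumptions, and the post's Intel-XPU-specific setup (e.g., intel_extension_for_pytorch) will differ.

```python
# Generic QLoRA-style setup: quantize the base LLM to 4 bits, then attach
# LoRA adapters so only a small fraction of parameters is trained.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder; the post may use a different base LLM

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections are the usual LoRA targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the LoRA adapters are trainable

# Fine-tuning for text-to-SQL then proceeds with a standard Trainer/SFT loop over
# prompts that pair a natural-language question with the table schema, using the
# target SQL query as the label.
```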
“AI Everywhere” is Connected by Ethernet Everywhere
As we realize AI is everywhere, how does the data move “everywhere”? The answer is Ethernet.
Unlocking Intel’s Neural Processing Unit with DirectML
As Artificial Intelligence (AI) is infused into every application, it’s transforming the PC experience, enabling new AI capabilities for creators to innovate, new tools to enhance productivity, and innovative ways to collaborate. To enable these new AI experiences, it requires … Continue reading
Advent of GenAI Hackathon: Recap of Challenge 3
Prediction Guard, a member of the Intel® Liftoff for Startups program, is hosting a generative AI hackathon with the support of the Liftoff team.
Optimizing AI Application Performance on AWS With Intel® Cloud Optimization Modules
Learn more about optimizations available for AI projects on AWS
HoneyBee: Intel Labs and Mila Collaborate on State-of-the-Art Language Model for Materials Science
Intel Labs and Mila collaborate on HoneyBee, a large language model specialized for materials science. The team uses MatSci-Instruct, an instruction-based process for trustworthy data curation in materials science, to fine-tune HoneyBee.