Category Archives: Uncategorized

Intel® Labs Uses AI and Audio Anomaly Detection to Prevent Semiconductor Manufacturing Malfunctions

Intel Labs is researching anomalous sound detection to improve semiconductor manufacturing for both Intel and its partners, using artificial intelligence (AI) and audio monitoring to track machine condition and health.

OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models

Developers can now leverage model caching through the OpenVINO™ Execution Provider for ONNX Runtime.
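
As a rough illustration, the sketch below shows how caching might be enabled from Python. It assumes an ONNX Runtime build that ships the OpenVINO Execution Provider and supports the cache_dir provider option; the model path, device string, cache directory, and input shape are placeholders.

    # Minimal sketch: enable model caching via the OpenVINO Execution Provider.
    # Assumes an ONNX Runtime build with the OpenVINO EP and its "cache_dir" option;
    # "model.onnx", the device string, and the input shape are placeholders.
    import numpy as np
    import onnxruntime as ort

    provider_options = [{
        "device_type": "CPU_FP32",   # target device/precision for OpenVINO
        "cache_dir": "./ov_cache",   # compiled blobs are stored here and reused later
    }]

    session = ort.InferenceSession(
        "model.onnx",
        providers=["OpenVINOExecutionProvider"],
        provider_options=provider_options,
    )

    # The first run compiles the model and populates the cache; subsequent process
    # launches load the cached blob, which is what improves first-inference latency.
    input_name = session.get_inputs()[0].name
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input for an image model
    outputs = session.run(None, {input_name: dummy})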

AttentionLite: Towards Efficient Self-Attention Models for Vision

Intel Labs has created a novel framework for producing a class of parameter- and compute-efficient models called AttentionLite, which leverages recent advances in self-attention as a substitute for convolutions.
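
For intuition only, the generic sketch below swaps a convolution for multi-head self-attention over the spatial positions of a feature map; it illustrates the substitution idea and is not the AttentionLite architecture itself.

    # Generic sketch: self-attention across H*W positions in place of a convolution.
    # This is an illustration of the idea, not the AttentionLite model.
    import torch
    import torch.nn as nn

    class SelfAttention2d(nn.Module):
        """Applies multi-head self-attention across the H*W positions of a feature map."""
        def __init__(self, channels, num_heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

        def forward(self, x):                      # x: (B, C, H, W)
            b, c, h, w = x.shape
            tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C): each position is a token
            out, _ = self.attn(tokens, tokens, tokens)
            return out.transpose(1, 2).reshape(b, c, h, w)

    # Where a block might otherwise use nn.Conv2d(64, 64, 3, padding=1), the attention
    # variant mixes information across all positions while keeping the output shape.
    layer = SelfAttention2d(channels=64)
    y = layer(torch.randn(2, 64, 16, 16))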

NEMO: A Novel Multi-Objective Optimization Method for AI Challenges

Neuroevolution-Enhanced Multi-Objective Optimization (NEMO) for Mixed-Precision Quantization delivers state-of-the-art compute speedups and memory improvements for artificial intelligence (AI) applications.

Best Practices for Text Classification with Distillation (Part 3/4) – Word Order Sensitivity (WOS)

In this post, I introduce a metric for estimating the complexity level of your dataset and task, and I describe how to utilize it to optimize distillation performance.
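
The post defines the metric precisely; purely as a mental model, something in the spirit of word order sensitivity can be estimated by shuffling each example's tokens and counting how often the model's prediction flips. The sketch below does exactly that, with a hypothetical model.predict interface that is not a real library call.

    # Hypothetical illustration of a word-order-sensitivity style measurement.
    # The actual WOS metric in the post may be defined differently.
    import random

    def prediction_flip_rate(model, texts, seed=0):
        """Fraction of examples whose predicted label changes when word order is shuffled.
        `model.predict(text) -> label` is an assumed interface for illustration only."""
        rng = random.Random(seed)
        flips = 0
        for text in texts:
            words = text.split()
            rng.shuffle(words)
            if model.predict(" ".join(words)) != model.predict(text):
                flips += 1
        return flips / max(len(texts), 1)

    # A high flip rate suggests the task depends on word order; a low rate suggests
    # bag-of-words cues dominate, which matters when choosing a distillation target.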

Seat of Knowledge: AI Systems with Deeply Structured Knowledge

This blog outlines the third class in this classification of AI systems and its promising role in supporting machine understanding, context-based decision making, and other aspects of higher machine intelligence.

On the Geometry of Generalization and Memorization in Deep Neural Networks

In our latest work, presented at the 2021 International Conference on Learning Representations (ICLR), we force a deep network to memorize some of its training examples by randomly changing their labels.
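
The mechanism described here, randomizing the labels of a subset of training examples, can be sketched in a few lines. The fraction, class count, and dataset below are placeholders, and the paper's exact protocol may differ.

    # Minimal sketch of randomizing labels for a fraction of training examples.
    # Fraction, class count, and dataset size are placeholders.
    import torch

    def randomize_labels(labels, fraction=0.1, num_classes=10, seed=0):
        """Return a copy of `labels` where a random `fraction` of entries is replaced
        with a uniformly random different class."""
        g = torch.Generator().manual_seed(seed)
        labels = labels.clone()
        n = labels.numel()
        idx = torch.randperm(n, generator=g)[: int(fraction * n)]
        # Add a random offset in [1, num_classes-1] so the new label always differs.
        offset = torch.randint(1, num_classes, (idx.numel(),), generator=g)
        labels[idx] = (labels[idx] + offset) % num_classes
        return labels

    # Example: corrupt 10% of CIFAR-10-style integer labels.
    clean = torch.randint(0, 10, (50_000,))
    noisy = randomize_labels(clean, fraction=0.1)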

Best Practices for Text Classification with Distillation (Part 2/4) – Challenging Use Cases

In this blog, I explore this method further and investigate other text classification datasets and sub-tasks in an effort to replicate these results.

Learning to Optimize Memory Allocation on Hardware using Reinforcement Learning

We describe a scalable framework that combines deep reinforcement learning with genetic algorithms to search extremely large combinatorial spaces, solving a critical memory allocation problem in hardware.

Best Practices for Text Classification with Distillation (Part 1/4) – How to achieve BERT results by

Model distillation is a powerful model-compression technique, and in many use cases it yields significant speedups and reductions in memory footprint.
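
As background, the classic distillation objective trains a small student to match a large teacher's softened outputs while still fitting the true labels. The generic sketch below follows that recipe (in the style of Hinton et al., 2015); the temperature, weighting, and stand-in models are placeholder assumptions, not the exact setup used in this series.

    # Generic knowledge-distillation loss, shown only as background.
    # Temperature, alpha, and the random stand-in logits are placeholders.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        """Blend soft-target KL divergence against the teacher with ordinary cross-entropy."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)                    # rescale so gradients stay comparable across temperatures
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Example with random tensors standing in for a teacher (e.g. BERT) and a small student.
    student_logits = torch.randn(8, 2, requires_grad=True)
    teacher_logits = torch.randn(8, 2)
    labels = torch.randint(0, 2, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()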
