AttentionLite: Towards Efficient Self-Attention Models for Vision

Intel Labs has developed a novel framework for producing AttentionLite, a class of parameter- and compute-efficient vision models that leverage recent advances in self-attention as a substitute for convolutions.
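As background for the idea of substituting self-attention for convolutions, the sketch below shows a minimal local self-attention layer in PyTorch that attends over each pixel's k×k spatial neighborhood instead of applying fixed convolutional weights. This is an illustrative example only, not Intel's AttentionLite implementation; the class name, projections, and hyperparameters are assumptions for the sketch.

```python
# Minimal sketch (assumed, not Intel's code): a local self-attention layer
# that can stand in for a k x k convolution on a 2D feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalSelfAttention2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=7):
        super().__init__()
        self.kernel_size = kernel_size
        self.scale = out_channels ** -0.5
        # 1x1 projections play the role of query/key/value maps.
        self.to_q = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.to_k = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.to_v = nn.Conv2d(in_channels, out_channels, 1, bias=False)

    def forward(self, x):
        b, _, h, w = x.shape
        k = self.kernel_size
        pad = k // 2
        q = self.to_q(x)                                    # (b, c, h, w)
        kx = F.unfold(self.to_k(x), k, padding=pad)         # (b, c*k*k, h*w)
        vx = F.unfold(self.to_v(x), k, padding=pad)         # (b, c*k*k, h*w)
        c = q.shape[1]
        kx = kx.view(b, c, k * k, h * w)
        vx = vx.view(b, c, k * k, h * w)
        q = q.view(b, c, 1, h * w)
        # Attention weights over each pixel's k*k neighborhood.
        attn = (q * kx * self.scale).sum(dim=1, keepdim=True).softmax(dim=2)
        out = (attn * vx).sum(dim=2)                         # (b, c, h*w)
        return out.reshape(b, c, h, w)

# Usage: drop-in replacement for a 7x7 convolution on a feature map.
layer = LocalSelfAttention2d(in_channels=64, out_channels=64, kernel_size=7)
y = layer(torch.randn(2, 64, 32, 32))   # -> shape (2, 64, 32, 32)
```

Layers of this kind keep the parameter count independent of the neighborhood size (only the 1×1 projections carry weights), which is one reason self-attention is attractive as a parameter-efficient replacement for convolution.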
