Researchers at Intel Labs, in collaboration with Xiamen University, have presented LiSA, the first semantic-aware AI framework for highly accurate 3D visual localization. LiSA leverages pre-trained semantic segmentation models to significantly improve state-of-the-art 3D visual localization accuracy without introducing any computational overhead during inference. LiSA was presented at CVPR 2024 as a highlight paper, a distinction given to the top 3.6% of conference papers.
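One way to reconcile "uses a semantic segmentation model" with "no inference overhead" is to apply the segmentation network only at training time, e.g. by distilling its features into the localization network. The sketch below illustrates that general idea; the module names, losses, and architecture are illustrative assumptions, not the actual LiSA method.

```python
# Hypothetical sketch: a frozen, pre-trained semantic segmentation "teacher"
# guides a localization "student" during training only, so inference runs the
# student alone with no added cost. All names and losses are assumptions for
# illustration, not the LiSA architecture.
import torch
import torch.nn as nn

class FrozenSegmentationTeacher(nn.Module):
    """Stand-in for a pre-trained semantic segmentation backbone (frozen)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.backbone(x)  # semantic feature map

class LocalizationStudent(nn.Module):
    """Localization network with an auxiliary distillation head used only in training."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.pose_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, 6)  # e.g. 6-DoF pose
        )
        self.distill_head = nn.Conv2d(feat_dim, feat_dim, 1)  # training-time only

    def forward(self, x):
        feats = self.encoder(x)
        return self.pose_head(feats), self.distill_head(feats)

teacher = FrozenSegmentationTeacher().eval()
student = LocalizationStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

images = torch.randn(2, 3, 64, 64)   # dummy batch
gt_pose = torch.randn(2, 6)          # dummy ground-truth poses

# Training step: pose supervision plus a feature-distillation term.
pose_pred, distilled = student(images)
with torch.no_grad():
    semantic_feats = teacher(images)
loss = nn.functional.mse_loss(pose_pred, gt_pose) \
     + 0.1 * nn.functional.mse_loss(distilled, semantic_feats)
loss.backward()
opt.step()

# Inference: only the student runs; the teacher and distillation head add no cost.
pose_only, _ = student(images)
```

Because the teacher is dropped after training, the deployed model has the same runtime footprint as a localization network trained without semantic guidance, which is consistent with the article's "no computational overhead during inference" claim.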