Researchers at Intel Labs, in collaboration with Xiamen University, have presented LiSA, the first semantic-aware AI framework for highly accurate 3D visual localization. LiSA leverages pre-trained semantic segmentation models to significantly improve state-of-the-art 3D visual localization accuracy without introducing computational overhead during inference. LiSA was presented at CVPR 2024 as a highlight paper, a distinction given to the top 3.6% of conference papers.