Researchers at Intel Labs, in collaboration with Xiamen University, have presented LiSA, the first semantic-aware AI framework for highly accurate 3D visual localization. LiSA leverages pre-trained semantic segmentation models to significantly improve state-of-the-art 3D visual localization accuracy without introducing any computational overhead during inference. LiSA was presented at CVPR 2024 as a highlight paper, a distinction given to the top 3.6% of conference papers.
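Improving accuracy with a pre-trained segmentation model while adding no inference-time cost suggests a training-time distillation setup: a frozen semantic "teacher" guides the localization "student" during training, and only the student runs at inference. The sketch below is a hypothetical, greatly simplified illustration of that general pattern, not Intel's actual LiSA code; all function names and the toy 1-D "features" are invented for illustration.

```python
# Hypothetical sketch of training-time semantic distillation.
# A frozen teacher supervises the student's features during training;
# at inference only the student is evaluated, so no overhead is added.

def teacher_semantic_features(points):
    # Stand-in for a frozen pre-trained segmentation model's features.
    return [0.5 * p + 1.0 for p in points]

def student_features(points, weight):
    # Stand-in for the localization network's intermediate features
    # (here a single scalar weight, for illustration only).
    return [weight * p for p in points]

def distillation_loss(student, teacher):
    # Mean squared error pulls student features toward teacher features.
    return sum((s - t) ** 2 for s, t in zip(student, teacher)) / len(student)

def train_step(points, weight, lr=0.01):
    # One gradient-descent step on the distillation loss w.r.t. `weight`.
    teacher = teacher_semantic_features(points)
    student = student_features(points, weight)
    grad = sum(2.0 * (s - t) * p
               for s, t, p in zip(student, teacher, points)) / len(points)
    return weight - lr * grad

points = [1.0, 2.0, 3.0]
w = 0.0
for _ in range(500):
    w = train_step(points, w)

# At inference, only student_features(points, w) is computed --
# the teacher network is discarded entirely.
```

The key design point this toy example mirrors is that the distillation loss exists only in the training loop; the deployed model's architecture and runtime cost are unchanged.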