LAPTNet: LiDAR-Aided Perspective Transform Network
Conference paper · 2022

Abstract

Semantic grids are a useful representation of the environment around a robot. They can be used in autonomous vehicles to concisely represent the scene around the car, capturing vital information for downstream tasks like navigation or collision assessment. Information from different sensors can be used to generate these grids. Some methods rely only on RGB images, whereas others choose to incorporate information from other sensors, such as radar or LiDAR. In this paper, we present an architecture that fuses LiDAR and camera information to generate semantic grids. By using the 3D information from a LiDAR point cloud, the LiDAR-Aided Perspective Transform Network (LAPTNet) is able to associate features in the camera plane to the bird's eye view without having to predict any depth information about the scene. Compared to state-of-the-art camera-only methods, LAPTNet achieves an improvement of up to 8.8 points (or 38.13%) for the classes proposed in the NuScenes dataset validation split.
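The core idea described above can be illustrated with a minimal sketch: project LiDAR points into the camera image to look up per-pixel features, then scatter those features into the BEV grid cell each 3D point falls in, so no depth has to be predicted. This is not the authors' code; the function name, argument layout, and coordinate conventions (LiDAR-to-camera extrinsics `T_cam_from_lidar`, pinhole intrinsics `K`, a square BEV grid parameterized by origin and resolution) are illustrative assumptions.

```python
import numpy as np

def lidar_aided_feature_transfer(points_lidar, feats_img, T_cam_from_lidar,
                                 K, grid_shape, grid_res, grid_origin):
    """Illustrative sketch (not the paper's implementation): use LiDAR 3D
    points to carry image-plane features into a bird's-eye-view grid."""
    n = len(points_lidar)
    # 1. Transform LiDAR points into the camera frame (homogeneous coords).
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]
    pts_lidar_kept = points_lidar[in_front]
    # 2. Project into the image plane with the intrinsic matrix K.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    H, W, C = feats_img.shape
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # 3. Scatter the sampled image features into the BEV cell that each
    #    LiDAR point occupies (indexed by its ground-plane x, y position).
    bev = np.zeros(grid_shape + (C,))
    gx = ((pts_lidar_kept[valid, 0] - grid_origin[0]) / grid_res).astype(int)
    gy = ((pts_lidar_kept[valid, 1] - grid_origin[1]) / grid_res).astype(int)
    ok = (gx >= 0) & (gx < grid_shape[0]) & (gy >= 0) & (gy < grid_shape[1])
    bev[gx[ok], gy[ok]] = feats_img[v[valid][ok], u[valid][ok]]
    return bev
```

In the actual network the sampled features would be learned CNN activations and the scatter step would be differentiable; the sketch only shows the geometric association that replaces depth prediction.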
Main file: LaserAidedProjectiveTrans.pdf (1.59 MB)
manuscript.pdf (1.57 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03851513, version 1 (14-11-2022)

Identifiers

  • HAL Id: hal-03851513, version 1

Cite

Manuel Alejandro Diaz-Zapata, Özgür Erkent, Christian Laugier, Jilles Dibangoye, David Sierra González. LAPTNet: LiDAR-Aided Perspective Transform Network. ICARCV 2022 - 17th International Conference on Control, Automation, Robotics and Vision, Dec 2022, Singapore, Singapore. ⟨hal-03851513⟩