Learning of Multimodal Point Descriptors in Radar and LIDAR Point Clouds

Publication: Contribution to book/report/anthology/conference proceedings › Paper in conference proceedings › Research › Peer-reviewed


Details

Original language: English
Title of host publication: 2024 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)
ISBN (electronic): 979-8-3503-6803-1
Publication status: Published - 2024

Publication series

Name: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems
ISSN (print): 2835-947X
ISSN (electronic): 2767-9357

Abstract

Registration of point clouds is a fundamental task in robotic SLAM pipelines. Typically, this task is performed only on point clouds from the same sensor, or at least the same sensing modality. However, robots designed for challenging environments are often equipped with redundant sensors for the same task, where some sensors are more accurate and others are more robust to adverse environmental conditions. Being able to register data across modalities is an important step toward more fault-tolerant localization and mapping. We therefore propose a learning framework that describes the points in a point cloud invariant to their modality. This description is then used in a transformer-like model to find point matches for the registration process. We demonstrate our results using a scanning lidar and a radar sensor on our own and publicly available datasets.
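The abstract does not spell out the architecture, so the following is only an illustrative PyTorch sketch of the general idea described above: a shared per-point encoder embeds lidar and radar points into a common descriptor space, and an attention-style similarity step proposes cross-modal correspondences. All class, function, and parameter names below are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedPointEncoder(nn.Module):
    """Per-point MLP mapping xyz plus one per-modality channel (e.g. lidar
    intensity or radar cross-section) to an L2-normalised descriptor that is
    trained to be modality-invariant (illustrative sketch only)."""
    def __init__(self, in_dim: int = 4, desc_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, desc_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, in_dim) -> (N, desc_dim), unit-length descriptors
        return F.normalize(self.mlp(points), dim=-1)

def soft_match(desc_a: torch.Tensor, desc_b: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    # Attention-like soft assignment: rows are source points (e.g. radar),
    # columns are target points (e.g. lidar); each row sums to one.
    scores = desc_a @ desc_b.T / temperature
    return scores.softmax(dim=-1)

if __name__ == "__main__":
    encoder = SharedPointEncoder()
    radar = torch.randn(256, 4)    # dummy radar scan: xyz + RCS
    lidar = torch.randn(4096, 4)   # dummy lidar scan: xyz + intensity
    matches = soft_match(encoder(radar), encoder(lidar))
    print(matches.shape)           # torch.Size([256, 4096])

Soft correspondences of this kind would typically be passed to a robust pose estimator to recover the rigid transform between the two scans; consult the paper itself for the actual descriptor network and transformer-like matching model.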

Cite this

Learning of Multimodal Point Descriptors in Radar and LIDAR Point Clouds. / Rotter, Jan M.; Cohrs, Simon; Blume, Holger et al.
2024 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). 2024. (IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems).


Rotter, JM, Cohrs, S, Blume, H & Wagner, B 2024, Learning of Multimodal Point Descriptors in Radar and LIDAR Point Clouds. in 2024 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems. https://doi.org/10.1109/mfi62651.2024.10705777
Rotter, J. M., Cohrs, S., Blume, H., & Wagner, B. (2024). Learning of Multimodal Point Descriptors in Radar and LIDAR Point Clouds. In 2024 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI) (IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems). https://doi.org/10.1109/mfi62651.2024.10705777
Rotter JM, Cohrs S, Blume H, Wagner B. Learning of Multimodal Point Descriptors in Radar and LIDAR Point Clouds. in 2024 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). 2024. (IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems). doi: 10.1109/mfi62651.2024.10705777
Rotter, Jan M. ; Cohrs, Simon ; Blume, Holger et al. / Learning of Multimodal Point Descriptors in Radar and LIDAR Point Clouds. 2024 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). 2024. (IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems).
BibTeX
@inproceedings{3773efa7e9bd46b8b2210856de2ce5cf,
title = "Learning of Multimodal Point Descriptors in Radar and LIDAR Point Clouds",
abstract = "Registration of point clouds is a fundamental task in robotic SLAM pipelines. Typically this task is performed only on point clouds of the same sensor or at least the same sensing modality. However, robots designed for challenging environments are often equipped with redundant sensors for the same task where some sensors are more accurate and others are more robust against disturbing environmental conditions. Being able to register the data across the modalities is an important step to more fault-tolerant localization and mapping. We therefore propose a learning framework, which describes the points in the point cloud invariant of their modality. This description is then used in a transformer-like model to find point matches for the registration process. We demonstrate our results using a scanning lidar and radar sensor on our own and publicly available datasets.",
author = "Rotter, {Jan M.} and Simon Cohrs and Holger Blume and Bernardo Wagner",
note = "Publisher Copyright: {\textcopyright} 2024 IEEE.",
year = "2024",
doi = "10.1109/mfi62651.2024.10705777",
language = "English",
isbn = "979-8-3503-6804-8",
series = "IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems",
booktitle = "2024 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)",

}

RIS

TY - GEN

T1 - Learning of Multimodal Point Descriptors in Radar and LIDAR Point Clouds

AU - Rotter, Jan M.

AU - Cohrs, Simon

AU - Blume, Holger

AU - Wagner, Bernardo

N1 - Publisher Copyright: © 2024 IEEE.

PY - 2024

Y1 - 2024

N2 - Registration of point clouds is a fundamental task in robotic SLAM pipelines. Typically this task is performed only on point clouds of the same sensor or at least the same sensing modality. However, robots designed for challenging environments are often equipped with redundant sensors for the same task where some sensors are more accurate and others are more robust against disturbing environmental conditions. Being able to register the data across the modalities is an important step to more fault-tolerant localization and mapping. We therefore propose a learning framework, which describes the points in the point cloud invariant of their modality. This description is then used in a transformer-like model to find point matches for the registration process. We demonstrate our results using a scanning lidar and radar sensor on our own and publicly available datasets.

AB - Registration of point clouds is a fundamental task in robotic SLAM pipelines. Typically this task is performed only on point clouds of the same sensor or at least the same sensing modality. However, robots designed for challenging environments are often equipped with redundant sensors for the same task where some sensors are more accurate and others are more robust against disturbing environmental conditions. Being able to register the data across the modalities is an important step to more fault-tolerant localization and mapping. We therefore propose a learning framework, which describes the points in the point cloud invariant of their modality. This description is then used in a transformer-like model to find point matches for the registration process. We demonstrate our results using a scanning lidar and radar sensor on our own and publicly available datasets.

UR - http://www.scopus.com/inward/record.url?scp=85207854260&partnerID=8YFLogxK

U2 - 10.1109/mfi62651.2024.10705777

DO - 10.1109/mfi62651.2024.10705777

M3 - Conference contribution

SN - 979-8-3503-6804-8

T3 - IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems

BT - 2024 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)

ER -
