Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments.

Publication: Conference contribution › Paper › Research › Peer-reviewed

Authors

Sebastian P. Kleinschmidt, Bernardo Wagner

Organizational units


Details

Original language: English
Pages: 1-8
Publication status: Published - 19 Sep 2018
Event: 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) - University of Pennsylvania, Pennsylvania, United States
Duration: 6 Aug 2018 - 8 Aug 2018

Conference

Conference: 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)
Short title: SSRR
Country/Territory: United States
City: Pennsylvania
Period: 6 Aug 2018 - 8 Aug 2018

Abstract

For autonomous localization and navigation, a robot's ego-motion estimation is fundamental. RGB camera-based visual odometry (VO) has proven to be a robust technique for determining a robot's motion. In situations where direct sunlight, the absence of light, or the presence of dust or smoke makes vision difficult, RGB cameras may not provide a sufficient number of features for accurate and robust visual odometry. In contrast to cameras operating in the visual spectrum, imaging modalities such as thermal cameras can still identify a stable but small number of image features in these situations. Unfortunately, the smaller number of image features results in a less accurate VO. In this paper, we present an approach to monocular visual odometry that uses multimodal image features from different imaging modalities such as RGB, thermal, and hyperspectral images. By exploiting the strengths of the individual imaging modalities, the robustness and accuracy of VO can be drastically increased compared to traditional unimodal approaches. The presented method merges motion hypotheses derived from the various imaging modalities to create a more accurate motion estimate as well as a map of multimodal image features. The uni- and multimodal motion estimates are evaluated with respect to their absolute and relative trajectory errors. The results show that our multimodal approach works robustly in the presence of partial sensor failures while still creating a multimodal map containing image features of all modalities.
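As a rough illustration of the hypothesis-merging idea described above, the following Python sketch fuses per-modality relative-pose hypotheses by weighted quaternion and translation averaging. This is a minimal sketch under our own assumptions: the function name fuse_motion_hypotheses, the weighting by feature-match inlier counts, and the averaging scheme are illustrative and not taken from the paper.

import numpy as np
from scipy.spatial.transform import Rotation as R

def fuse_motion_hypotheses(rotations, translations, weights):
    # Fuse per-modality relative poses (R_i, t_i) into one motion
    # estimate. The weights are hypothetical, e.g. each modality's
    # feature-match inlier count (RGB, thermal, hyperspectral).
    w = np.asarray(weights, dtype=float)
    w /= w.sum()

    # Weighted quaternion mean; q and -q encode the same rotation,
    # so align signs with the first hypothesis before averaging.
    quats = np.array([R.from_matrix(Rm).as_quat() for Rm in rotations])
    for i in range(1, len(quats)):
        if np.dot(quats[0], quats[i]) < 0:
            quats[i] = -quats[i]
    q_mean = (w[:, None] * quats).sum(axis=0)
    q_mean /= np.linalg.norm(q_mean)

    # Weighted mean of the translation hypotheses.
    t_mean = (w[:, None] * np.asarray(translations, dtype=float)).sum(axis=0)
    return R.from_quat(q_mean).as_matrix(), t_mean

# Example: an RGB hypothesis with many inliers outweighs a weaker
# thermal hypothesis; all numbers are made up.
R_fused, t_fused = fuse_motion_hypotheses(
    [R.from_euler("z", 5, degrees=True).as_matrix(),
     R.from_euler("z", 7, degrees=True).as_matrix()],
    [[0.10, 0.0, 0.0], [0.12, 0.0, 0.0]],
    weights=[120, 40],
)

A scheme like this degrades gracefully: if one modality fails and contributes no inliers, its weight drops to zero and the remaining modalities carry the estimate, which matches the partial-sensor-failure behavior reported in the abstract.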
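The abstract also mentions evaluation by absolute and relative trajectory errors. Below is a sketch of the commonly used absolute trajectory error (position RMSE after rigid Kabsch alignment); the paper's exact evaluation protocol is not given in this record, and a full monocular evaluation would typically also align scale, which is omitted here for brevity.

import numpy as np

def absolute_trajectory_error(est, gt):
    # est, gt: (N, 3) arrays of estimated / ground-truth positions.
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)

    # Rigid (rotation + translation) alignment via the Kabsch method.
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R_align = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    aligned = (R_align @ (est - mu_e).T).T + mu_g

    # RMSE of the aligned position errors.
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))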

ASJC Scopus subject areas

Cite

Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments. / Kleinschmidt, Sebastian P.; Wagner, Bernardo.
2018. pp. 1-8. Paper presented at 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Pennsylvania, United States.


Kleinschmidt, SP & Wagner, B 2018, 'Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments.', Paper presented at 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Pennsylvania, United States, 6 Aug 2018 - 8 Aug 2018, pp. 1-8. https://doi.org/10.1109/ssrr.2018.8468653
Kleinschmidt, S. P., & Wagner, B. (2018). Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments. 1-8. Paper presented at 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Pennsylvania, United States. https://doi.org/10.1109/ssrr.2018.8468653
Kleinschmidt SP, Wagner B. Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments. 2018. Paper presented at 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Pennsylvania, United States. doi: 10.1109/ssrr.2018.8468653
Kleinschmidt, Sebastian P. ; Wagner, Bernardo. / Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments. Paper presented at 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Pennsylvania, United States.
BibTeX
@conference{44ae9d6f78e94537ab352aeabf364c0a,
title = "Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments.",
abstract = "For autonomous localization and navigation, a robot's ego-motion estimation is fundamental. RGB camera-based visual odometry (VO) has proven to be a robust technique for determining a robot's motion. In situations where direct sunlight, the absence of light, or the presence of dust or smoke makes vision difficult, RGB cameras may not provide a sufficient number of features for accurate and robust visual odometry. In contrast to cameras operating in the visual spectrum, imaging modalities such as thermal cameras can still identify a stable but small number of image features in these situations. Unfortunately, the smaller number of image features results in a less accurate VO. In this paper, we present an approach to monocular visual odometry that uses multimodal image features from different imaging modalities such as RGB, thermal, and hyperspectral images. By exploiting the strengths of the individual imaging modalities, the robustness and accuracy of VO can be drastically increased compared to traditional unimodal approaches. The presented method merges motion hypotheses derived from the various imaging modalities to create a more accurate motion estimate as well as a map of multimodal image features. The uni- and multimodal motion estimates are evaluated with respect to their absolute and relative trajectory errors. The results show that our multimodal approach works robustly in the presence of partial sensor failures while still creating a multimodal map containing image features of all modalities.",
keywords = "Multimodal Visual Odometry, Robust Pose Estimation, Sensor Fusion",
author = "Kleinschmidt, {Sebastian P.} and Bernardo Wagner",
note = "DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.; 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), SSRR ; Conference date: 06-08-2018 Through 08-08-2018",
year = "2018",
month = sep,
day = "19",
doi = "10.1109/ssrr.2018.8468653",
language = "English",
pages = "1--8",

}

RIS

TY - CONF

T1 - Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments.

AU - Kleinschmidt, Sebastian P.

AU - Wagner, Bernardo

N1 - DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.

PY - 2018/9/19

Y1 - 2018/9/19

N2 - For autonomous localization and navigation, a robot's ego-motion estimation is fundamental. RGB camera-based visual odometry (VO) has proven to be a robust technique for determining a robot's motion. In situations where direct sunlight, the absence of light, or the presence of dust or smoke makes vision difficult, RGB cameras may not provide a sufficient number of features for accurate and robust visual odometry. In contrast to cameras operating in the visual spectrum, imaging modalities such as thermal cameras can still identify a stable but small number of image features in these situations. Unfortunately, the smaller number of image features results in a less accurate VO. In this paper, we present an approach to monocular visual odometry that uses multimodal image features from different imaging modalities such as RGB, thermal, and hyperspectral images. By exploiting the strengths of the individual imaging modalities, the robustness and accuracy of VO can be drastically increased compared to traditional unimodal approaches. The presented method merges motion hypotheses derived from the various imaging modalities to create a more accurate motion estimate as well as a map of multimodal image features. The uni- and multimodal motion estimates are evaluated with respect to their absolute and relative trajectory errors. The results show that our multimodal approach works robustly in the presence of partial sensor failures while still creating a multimodal map containing image features of all modalities.

AB - For autonomous localization and navigation, a robot's ego-motion estimation is fundamental. RGB camera-based visual odometry (VO) has proven to be a robust technique for determining a robot's motion. In situations where direct sunlight, the absence of light, or the presence of dust or smoke makes vision difficult, RGB cameras may not provide a sufficient number of features for accurate and robust visual odometry. In contrast to cameras operating in the visual spectrum, imaging modalities such as thermal cameras can still identify a stable but small number of image features in these situations. Unfortunately, the smaller number of image features results in a less accurate VO. In this paper, we present an approach to monocular visual odometry that uses multimodal image features from different imaging modalities such as RGB, thermal, and hyperspectral images. By exploiting the strengths of the individual imaging modalities, the robustness and accuracy of VO can be drastically increased compared to traditional unimodal approaches. The presented method merges motion hypotheses derived from the various imaging modalities to create a more accurate motion estimate as well as a map of multimodal image features. The uni- and multimodal motion estimates are evaluated with respect to their absolute and relative trajectory errors. The results show that our multimodal approach works robustly in the presence of partial sensor failures while still creating a multimodal map containing image features of all modalities.

KW - Multimodal Visual Odometry

KW - Robust Pose Estimation

KW - Sensor Fusion

UR - http://www.scopus.com/inward/record.url?scp=85055542848&partnerID=8YFLogxK

U2 - 10.1109/ssrr.2018.8468653

DO - 10.1109/ssrr.2018.8468653

M3 - Paper

SP - 1

EP - 8

T2 - 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)

Y2 - 6 August 2018 through 8 August 2018

ER -
