Details
| Original language | English |
| --- | --- |
| Pages | 1-8 |
| Publication status | Published - 19 Sept 2018 |
| Event | 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) - University of Pennsylvania, Philadelphia, United States. Duration: 6 Aug 2018 → 8 Aug 2018 |
Conference
| Conference | 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR) |
| --- | --- |
| Abbreviated title | SSRR |
| Country/Territory | United States |
| City | Philadelphia |
| Period | 6 Aug 2018 → 8 Aug 2018 |
Abstract
Ego-motion estimation is fundamental to a robot's autonomous localization and navigation. RGB camera-based visual odometry (VO) has proven to be a robust technique for determining a robot's motion. However, in situations where direct sunlight, the absence of light, or the presence of dust or smoke makes vision difficult, RGB cameras may not provide a sufficient number of image features for accurate and robust visual odometry. In contrast to cameras operating in the visible spectrum, imaging modalities such as thermal cameras can still identify a stable, but small, number of image features in such situations. Unfortunately, the smaller number of image features results in a less accurate VO. In this paper, we present an approach to monocular visual odometry that uses multimodal image features from different imaging modalities such as RGB, thermal, and hyperspectral images. By combining the strengths of the individual imaging modalities, the robustness and accuracy of VO can be drastically increased compared to traditional unimodal approaches. The presented method merges motion hypotheses based on the different imaging modalities to produce a more accurate motion estimate as well as a map of multimodal image features. The uni- and multimodal motion estimates are evaluated with respect to their absolute and relative trajectory errors. The results show that our multimodal approach works robustly in the presence of partial sensor failures while still creating a multimodal map containing image features of all modalities.
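Two ideas from the abstract lend themselves to a short illustration: merging per-modality motion hypotheses, and scoring the resulting trajectory. The sketch below is not the authors' implementation; it assumes each modality delivers a 6-DoF motion estimate with a 6x6 covariance and merges the hypotheses by inverse-covariance (information-weighted) averaging, which is one standard way to fuse Gaussian estimates and need not match the paper's actual scheme. The `absolute_trajectory_error` helper shows the ATE metric named in the evaluation; all names and values are illustrative.

```python
import numpy as np

def fuse_motion_hypotheses(hypotheses):
    """Merge per-modality motion hypotheses into one estimate.

    hypotheses: list of (x, P) pairs, where x is a 6-vector motion
    estimate (e.g. an se(3) twist: 3 translation + 3 rotation
    components) and P its 6x6 covariance. A failed modality passes
    (None, None) and is skipped, so the fusion degrades gracefully
    under partial sensor failure.
    """
    valid = [(x, P) for x, P in hypotheses if x is not None]
    if not valid:
        return None, None  # total sensor failure: no hypothesis at all
    info_sum = np.zeros((6, 6))   # accumulated information matrix
    state_sum = np.zeros(6)       # information-weighted state sum
    for x, P in valid:
        info = np.linalg.inv(P)   # information = inverse covariance
        info_sum += info
        state_sum += info @ np.asarray(x, dtype=float)
    P_fused = np.linalg.inv(info_sum)
    return P_fused @ state_sum, P_fused

def absolute_trajectory_error(estimated, ground_truth):
    """RMSE of translational error between two aligned Nx3 trajectories."""
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Example: a confident thermal hypothesis outweighs a noisy RGB one,
# and a failed hyperspectral channel is simply ignored.
rgb     = (np.array([0.10, 0.0, 0.0, 0.0, 0.0, 0.01]), np.eye(6) * 0.04)
thermal = (np.array([0.12, 0.0, 0.0, 0.0, 0.0, 0.00]), np.eye(6) * 0.01)
hyper   = (None, None)  # e.g. a dropped frame
x, P = fuse_motion_hypotheses([rgb, thermal, hyper])
```

In this scheme a modality blinded by smoke or darkness simply contributes no hypothesis, so the fused estimate degrades gracefully rather than failing outright, which mirrors the partial-sensor-failure behaviour reported in the abstract.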
Keywords
- Multimodal Visual Odometry
- Robust Pose Estimation
- Sensor Fusion
ASJC Scopus subject areas
- Engineering (all)
- Aerospace Engineering
- Mathematics (all)
- Control and Optimization
- Social Sciences (all)
- Safety Research
- Computer Science (all)
- Artificial Intelligence
Cite this
Kleinschmidt, S. P., & Wagner, B. (2018). Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments. 1-8. Paper presented at 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Philadelphia, United States.
Research output: Contribution to conference › Paper › Research › peer review
TY - CONF
T1 - Visual Multimodal Odometry - Robust Visual Odometry in Harsh Environments.
AU - Kleinschmidt, Sebastian P.
AU - Wagner, Bernardo
N1 - DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.
PY - 2018/9/19
Y1 - 2018/9/19
AB - Ego-motion estimation is fundamental to a robot's autonomous localization and navigation. RGB camera-based visual odometry (VO) has proven to be a robust technique for determining a robot's motion. However, in situations where direct sunlight, the absence of light, or the presence of dust or smoke makes vision difficult, RGB cameras may not provide a sufficient number of image features for accurate and robust visual odometry. In contrast to cameras operating in the visible spectrum, imaging modalities such as thermal cameras can still identify a stable, but small, number of image features in such situations. Unfortunately, the smaller number of image features results in a less accurate VO. In this paper, we present an approach to monocular visual odometry that uses multimodal image features from different imaging modalities such as RGB, thermal, and hyperspectral images. By combining the strengths of the individual imaging modalities, the robustness and accuracy of VO can be drastically increased compared to traditional unimodal approaches. The presented method merges motion hypotheses based on the different imaging modalities to produce a more accurate motion estimate as well as a map of multimodal image features. The uni- and multimodal motion estimates are evaluated with respect to their absolute and relative trajectory errors. The results show that our multimodal approach works robustly in the presence of partial sensor failures while still creating a multimodal map containing image features of all modalities.
KW - Multimodal Visual Odometry
KW - Robust Pose Estimation
KW - Sensor Fusion
UR - http://www.scopus.com/inward/record.url?scp=85055542848&partnerID=8YFLogxK
U2 - 10.1109/ssrr.2018.8468653
DO - 10.1109/ssrr.2018.8468653
M3 - Paper
SP - 1
EP - 8
T2 - 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)
Y2 - 6 August 2018 through 8 August 2018
ER -