Details
| Original language | English |
| --- | --- |
| Title of host publication | Informatics in Control, Automation and Robotics |
| Editors | Dimitri Peaucelle, Kurosh Madani, Oleg Gusikhin |
| Pages | 153-174 |
| Number of pages | 22 |
| Publication status | Published - 2018 |
Publication series
| Name | Lecture Notes in Electrical Engineering |
| --- | --- |
| Volume | 430 |
| ISSN (Print) | 1876-1100 |
| ISSN (Electronic) | 1876-1119 |
Abstract
To analyze potential relations between different imaging technologies such as RGB, hyperspectral, IR and thermal cameras, spatially corresponding image regions need to be identified. Since images of different cameras cannot be taken from the same pose simultaneously, corresponding pixels in the captured images are spatially displaced or subject to time-variant factors. Furthermore, additional spatial deviations in the images are caused by varying camera parameters such as focal length, principal point and lens distortion. To reestablish the spatial relationship between images of different modalities, additional constraints need to be taken into account. For this reason, a new intermodal sensor fusion technique called Virtual Multimodal Camera (VMC) is presented in this paper. Using the presented approach, spatially corresponding images can be rendered for different camera technologies from the same virtual pose using a common parameter set. As a result, image points of the different modalities can be set into a spatial relationship so that the pixel locations in the images correspond to the same physical location. Additional contributions of this paper are the introduction of a hybrid calibration pattern for intrinsic and extrinsic intermodal camera calibration and a high-performance 2D-to-3D mapping procedure. All steps of the algorithm are performed in parallel on a graphics processing unit (GPU). As a result, large numbers of spatially corresponding images can be generated online for later analysis of intermodal relations.
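The core geometric step behind such a virtual camera can be illustrated in a few lines of NumPy: a pixel of one modality is back-projected to a 3D point using its depth and intrinsics (the 2D-to-3D mapping), then reprojected into a virtual camera with a common parameter set, so that both modalities address the same physical location. This is only a minimal per-pixel sketch; all calibration values below are hypothetical stand-ins for what the paper obtains with its hybrid calibration pattern, and the paper itself performs these steps in parallel on the GPU for whole images.

```python
import numpy as np

def backproject(u, v, depth, K):
    # Pinhole back-projection: pixel (u, v) with metric depth -> 3D point
    # in the camera frame (the 2D-to-3D mapping step).
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def project(P_world, K, R, t):
    # Project a world point into a camera with intrinsics K and extrinsics
    # (R, t), using p_cam = R @ P_world + t; returns pixel coordinates.
    p = K @ (R @ P_world + t)
    return p[:2] / p[2]

# Hypothetical calibration of one real modality camera and the shared
# virtual camera (in practice obtained via intrinsic/extrinsic calibration).
K_real = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
K_virt = np.array([[500.0, 0.0, 256.0], [0.0, 500.0, 256.0], [0.0, 0.0, 1.0]])
R_real, t_real = np.eye(3), np.zeros(3)                # real camera at world origin
R_virt, t_virt = np.eye(3), np.array([0.1, 0.0, 0.0])  # virtual pose, 10 cm offset

P_cam = backproject(400, 300, depth=2.0, K=K_real)    # pixel -> 3D, camera frame
P_world = R_real.T @ (P_cam - t_real)                 # camera frame -> world frame
u_v, v_v = project(P_world, K_virt, R_virt, t_virt)   # world -> virtual-camera pixel
print(round(u_v, 1), round(v_v, 1))                   # same physical point, virtual image
```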
Keywords
- GPU-acceleration, Intermodal sensor fusion, Virtual multimodal camera
ASJC Scopus subject areas
- Engineering (all)
- Industrial and Manufacturing Engineering
Cite this
Informatics in Control, Automation and Robotics. ed. / Dimitri Peaucelle; Kurosh Madani; Oleg Gusikhin. 2018. p. 153-174 (Lecture Notes in Electrical Engineering; Vol. 430).
Research output: Chapter in book/report/conference proceeding › Contribution to book/anthology › Research › peer review
TY - CHAP
T1 - Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera
AU - Kleinschmidt, Sebastian P.
AU - Wagner, Bernardo
N1 - DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions. Publisher Copyright: © Springer International Publishing AG 2018. Copyright: Copyright 2017 Elsevier B.V., All rights reserved.
PY - 2018
Y1 - 2018
N2 - To analyze potential relations between different imaging technologies such as RGB, hyperspectral, IR and thermal cameras, spatially corresponding image regions need to be identified. Since images of different cameras cannot be taken from the same pose simultaneously, corresponding pixels in the captured images are spatially displaced or subject to time-variant factors. Furthermore, additional spatial deviations in the images are caused by varying camera parameters such as focal length, principal point and lens distortion. To reestablish the spatial relationship between images of different modalities, additional constraints need to be taken into account. For this reason, a new intermodal sensor fusion technique called Virtual Multimodal Camera (VMC) is presented in this paper. Using the presented approach, spatially corresponding images can be rendered for different camera technologies from the same virtual pose using a common parameter set. As a result, image points of the different modalities can be set into a spatial relationship so that the pixel locations in the images correspond to the same physical location. Additional contributions of this paper are the introduction of a hybrid calibration pattern for intrinsic and extrinsic intermodal camera calibration and a high-performance 2D-to-3D mapping procedure. All steps of the algorithm are performed in parallel on a graphics processing unit (GPU). As a result, large numbers of spatially corresponding images can be generated online for later analysis of intermodal relations.
AB - To analyze potential relations between different imaging technologies such as RGB, hyperspectral, IR and thermal cameras, spatially corresponding image regions need to be identified. Since images of different cameras cannot be taken from the same pose simultaneously, corresponding pixels in the captured images are spatially displaced or subject to time-variant factors. Furthermore, additional spatial deviations in the images are caused by varying camera parameters such as focal length, principal point and lens distortion. To reestablish the spatial relationship between images of different modalities, additional constraints need to be taken into account. For this reason, a new intermodal sensor fusion technique called Virtual Multimodal Camera (VMC) is presented in this paper. Using the presented approach, spatially corresponding images can be rendered for different camera technologies from the same virtual pose using a common parameter set. As a result, image points of the different modalities can be set into a spatial relationship so that the pixel locations in the images correspond to the same physical location. Additional contributions of this paper are the introduction of a hybrid calibration pattern for intrinsic and extrinsic intermodal camera calibration and a high-performance 2D-to-3D mapping procedure. All steps of the algorithm are performed in parallel on a graphics processing unit (GPU). As a result, large numbers of spatially corresponding images can be generated online for later analysis of intermodal relations.
KW - GPU-acceleration
KW - Intermodal sensor fusion
KW - Virtual multimodal camera
UR - http://www.scopus.com/inward/record.url?scp=85034231405&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-55011-4_8
DO - 10.1007/978-3-319-55011-4_8
M3 - Contribution to book/anthology
SN - 9783319550107
T3 - Lecture Notes in Electrical Engineering
SP - 153
EP - 174
BT - Informatics in Control, Automation and Robotics
A2 - Peaucelle, Dimitri
A2 - Madani, Kurosh
A2 - Gusikhin, Oleg
ER -