Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera.

Publication: Contribution to book/report/anthology/conference proceedings › Contribution to book/anthology › Research › Peer-reviewed

Authors

Sebastian P. Kleinschmidt, Bernardo Wagner

Details

Original language: English
Title of host publication: Informatics in Control, Automation and Robotics
Editors: Dimitri Peaucelle, Kurosh Madani, Oleg Gusikhin
Pages: 153-174
Number of pages: 22
Publication status: Published - 2018

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 430
ISSN (print): 1876-1100
ISSN (electronic): 1876-1119

Abstract

To analyze potential relations between different imaging technologies such as RGB, hyperspectral, IR and thermal cameras, spatially corresponding image regions need to be identified. Since images of different cameras cannot be taken from the same pose simultaneously, corresponding pixels in the captured images are spatially displaced or subject to time-variant factors. Furthermore, additional spatial deviations in the images are caused by varying camera parameters such as focal length, principal point and lens distortion. To reestablish the spatial relationship between images of different modalities, additional constraints need to be taken into account. For this reason, a new intermodal sensor fusion technique called Virtual Multimodal Camera (VMC) is presented in this paper. Using the presented approach, spatially corresponding images can be rendered for different camera technologies from the same virtual pose using a common parameter set. As a result, image points of the different modalities can be set into a spatial relationship so that the pixel locations in the images correspond to the same physical location. Additional contributions of this paper are the introduction of a hybrid calibration pattern for intrinsic and extrinsic intermodal camera calibration and a high-performance 2D-to-3D mapping procedure. All steps of the algorithm are performed in parallel on a graphics processing unit (GPU). As a result, large amounts of spatially corresponding images can be generated online for later analysis of intermodal relations.
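The core idea the abstract describes (back-project a pixel to 3D, move it into a common virtual camera pose, and re-project it) can be illustrated with a minimal pinhole-camera sketch. This is plain NumPy without the paper's GPU parallelism or lens-distortion handling, and the function name, interface, and parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reproject_to_virtual_camera(depth, K_src, K_virt, T_virt_from_src):
    """Map each pixel of a source camera into a virtual camera's image plane.

    depth           : (H, W) depth map of the source camera
    K_src, K_virt   : (3, 3) pinhole intrinsics of the source / virtual camera
    T_virt_from_src : (4, 4) rigid transform from the source to the virtual frame
    Returns an (H, W, 2) array: the virtual-image pixel coordinates of every
    source pixel (standard back-projection followed by forward projection).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # 2D -> 3D: back-project through the source intrinsics, scale by depth
    rays = np.linalg.inv(K_src) @ pix
    pts_src = rays * depth.reshape(1, -1)          # 3D points in the source frame

    # Change of frame: source camera -> virtual camera
    pts_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])
    pts_virt = (T_virt_from_src @ pts_h)[:3]

    # 3D -> 2D: project with the virtual camera's intrinsics
    proj = K_virt @ pts_virt
    return (proj[:2] / proj[2:3]).T.reshape(H, W, 2)
```

Running this once per modality with the same `K_virt` and virtual pose yields images whose pixel locations refer to the same physical point, which is the spatial correspondence the VMC approach establishes.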

ASJC Scopus subject areas

Cite this

Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera. / Kleinschmidt, Sebastian P.; Wagner, Bernardo.
Informatics in Control, Automation and Robotics. Ed. / Dimitri Peaucelle; Kurosh Madani; Oleg Gusikhin. 2018. pp. 153-174 (Lecture Notes in Electrical Engineering; Vol. 430).

Kleinschmidt, SP & Wagner, B 2018, Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera. in D Peaucelle, K Madani & O Gusikhin (eds), Informatics in Control, Automation and Robotics. Lecture Notes in Electrical Engineering, vol. 430, pp. 153-174. https://doi.org/10.1007/978-3-319-55011-4_8
Kleinschmidt, S. P., & Wagner, B. (2018). Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera. In D. Peaucelle, K. Madani, & O. Gusikhin (Eds.), Informatics in Control, Automation and Robotics (pp. 153-174). (Lecture Notes in Electrical Engineering; Vol. 430). https://doi.org/10.1007/978-3-319-55011-4_8
Kleinschmidt SP, Wagner B. Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera. In Peaucelle D, Madani K, Gusikhin O, editors, Informatics in Control, Automation and Robotics. 2018. p. 153-174. (Lecture Notes in Electrical Engineering). Epub 2017 Nov 3. doi: 10.1007/978-3-319-55011-4_8
Kleinschmidt, Sebastian P. ; Wagner, Bernardo. / Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera. Informatics in Control, Automation and Robotics. Ed. / Dimitri Peaucelle ; Kurosh Madani ; Oleg Gusikhin. 2018. pp. 153-174 (Lecture Notes in Electrical Engineering).
BibTeX
@inbook{df6971fe0c9944af9409bbb25e9a1c84,
title = "Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera.",
abstract = "To analyze potential relations between different imaging technologies such as RGB, hyperspectral, IR and thermal cameras, spatially corresponding image regions need to be identified. Regarding the fact that images of different cameras cannot be taken from the same pose simultaneously, corresponding pixels in the taken images are spatially displaced or subject to time variant factors. Furthermore, additional spatial deviations in the images are caused by varying camera parameters such as focal length, principal point and lens distortion. To reestablish the spatial relationship between images of different modalities, additional constraints need to be taken into account. For this reason, a new intermodal sensor fusion technique called Virtual Multimodal Camera (VMC) is presented in this paper. Using the presented approach, spatially corresponding images can be rendered for different camera technologies from the same virtual pose using a common parameter set. As a result, image points of the different modalities can be set into a spatial relationship so that the pixel locations in the images correspond to the same physical location. Additional contributions of this paper are the introduction of an hybrid calibration pattern for intrinsic and extrinsic intermodal camera calibration and a high performance 2D-to-3D mapping procedure. All steps of the algorithm are performed parallelly on a graphics processing unit (GPU). As a result, large amount of spatially corresponding images can be generated online for later analysis of intermodal relations.",
keywords = "GPU-acceleration, Intermodal sensor fusion, Virtual multimodal camera",
author = "Kleinschmidt, {Sebastian P.} and Bernardo Wagner",
note = "DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions. Publisher Copyright: {\textcopyright} Springer International Publishing AG 2018. Copyright: Copyright 2017 Elsevier B.V., All rights reserved.",
year = "2018",
doi = "10.1007/978-3-319-55011-4_8",
language = "English",
isbn = "9783319550107",
series = "Lecture Notes in Electrical Engineering",
pages = "153--174",
editor = "Dimitri Peaucelle and Kurosh Madani and Oleg Gusikhin",
booktitle = "Informatics in Control, Automation and Robotics",

}

RIS

TY - CHAP

T1 - Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera.

AU - Kleinschmidt, Sebastian P.

AU - Wagner, Bernardo

PY - 2018

Y1 - 2018

AB - To analyze potential relations between different imaging technologies such as RGB, hyperspectral, IR and thermal cameras, spatially corresponding image regions need to be identified. Regarding the fact that images of different cameras cannot be taken from the same pose simultaneously, corresponding pixels in the taken images are spatially displaced or subject to time variant factors. Furthermore, additional spatial deviations in the images are caused by varying camera parameters such as focal length, principal point and lens distortion. To reestablish the spatial relationship between images of different modalities, additional constraints need to be taken into account. For this reason, a new intermodal sensor fusion technique called Virtual Multimodal Camera (VMC) is presented in this paper. Using the presented approach, spatially corresponding images can be rendered for different camera technologies from the same virtual pose using a common parameter set. As a result, image points of the different modalities can be set into a spatial relationship so that the pixel locations in the images correspond to the same physical location. Additional contributions of this paper are the introduction of an hybrid calibration pattern for intrinsic and extrinsic intermodal camera calibration and a high performance 2D-to-3D mapping procedure. All steps of the algorithm are performed parallelly on a graphics processing unit (GPU). As a result, large amount of spatially corresponding images can be generated online for later analysis of intermodal relations.

KW - GPU-acceleration

KW - Intermodal sensor fusion

KW - Virtual multimodal camera

UR - http://www.scopus.com/inward/record.url?scp=85034231405&partnerID=8YFLogxK

U2 - 10.1007/978-3-319-55011-4_8

DO - 10.1007/978-3-319-55011-4_8

M3 - Contribution to book/anthology

SN - 9783319550107

T3 - Lecture Notes in Electrical Engineering

SP - 153

EP - 174

BT - Informatics in Control, Automation and Robotics

A2 - Peaucelle, Dimitri

A2 - Madani, Kurosh

A2 - Gusikhin, Oleg

ER -
