Details
Original language | English |
---|---|
Title of host publication | Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics |
Editors | Oleg Gusikhin, Dimitri Peaucelle, Kurosh Madani |
Pages | 19-29 |
Number of pages | 11 |
ISBN (electronic) | 9789897581984 |
Publication status | Published - 2016 |
Abstract
In this paper, a new virtual reality (VR) control concept for operating robots in search and rescue (SAR) scenarios is introduced. The presented approach intuitively provides different sensor signals such as RGB, thermal and active infrared images by projecting them onto 3D structures generated by a Time-of-Flight (ToF)-based depth camera. The multichannel 3D data are displayed on an Oculus Rift head-mounted display, which also provides head-tracking information. Using 3D structures can improve the perception of scale and depth by enabling stereoscopic images, which cannot be generated from stand-alone 2D images. Besides the described operating concept, the main contributions of this paper are the introduction of a hybrid calibration pattern for multi-sensor calibration and a high-performance 2D-to-3D mapping procedure. To ensure low latencies, all steps of the algorithm are performed in parallel on a graphics processing unit (GPU), reducing processing time by 80.03% compared with a traditional central processing unit (CPU) implementation. Furthermore, the different input images are merged according to their importance for the operator to create a multi-sensor point cloud.
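The mapping the abstract describes is naturally per-pixel, which is what makes it a good fit for a GPU: each ToF depth pixel is back-projected to a 3D point, transformed into a 2D sensor's frame, projected through that sensor's pinhole model to sample a value, and the sampled channels are blended by importance. The following is a minimal CUDA sketch of that idea, not the authors' implementation; the intrinsics, extrinsics, buffer layout and the linear blend weight `wThermal` are illustrative assumptions.

```cuda
// mapping_sketch.cu -- illustrative sketch of per-pixel 2D-to-3D mapping,
// NOT the paper's code. Build with: nvcc mapping_sketch.cu -o mapping_sketch
#include <cstdio>
#include <cuda_runtime.h>

struct Intrinsics { float fx, fy, cx, cy; };    // pinhole camera parameters
struct Pose       { float R[9]; float t[3]; };  // row-major rotation + translation

// Apply a rigid-body transform to a point.
__device__ float3 transform(const Pose& p, float3 v) {
    return make_float3(p.R[0]*v.x + p.R[1]*v.y + p.R[2]*v.z + p.t[0],
                       p.R[3]*v.x + p.R[4]*v.y + p.R[5]*v.z + p.t[1],
                       p.R[6]*v.x + p.R[7]*v.y + p.R[8]*v.z + p.t[2]);
}

// One thread per ToF depth pixel: back-project, reproject, sample, blend.
__global__ void mapDepthToColor(const float* depth, int dw, int dh,
                                const uchar3* rgb, const float* thermal,
                                int cw, int ch,
                                Intrinsics tof, Intrinsics cam, Pose tofToCam,
                                float wThermal,   // importance of the thermal channel
                                float3* points, float3* colors)
{
    int u = blockIdx.x * blockDim.x + threadIdx.x;
    int v = blockIdx.y * blockDim.y + threadIdx.y;
    if (u >= dw || v >= dh) return;
    int i = v * dw + u;
    points[i] = make_float3(0.f, 0.f, 0.f);
    colors[i] = make_float3(0.f, 0.f, 0.f);

    float z = depth[i];
    if (z <= 0.f) return;                         // invalid depth measurement

    // Back-project the depth pixel into a 3D point in the ToF frame.
    float3 p = make_float3((u - tof.cx) * z / tof.fx,
                           (v - tof.cy) * z / tof.fy, z);
    points[i] = p;

    // Transform into the 2D sensor frame and project with its pinhole model.
    float3 q = transform(tofToCam, p);
    if (q.z <= 0.f) return;
    int cu = __float2int_rn(cam.fx * q.x / q.z + cam.cx);
    int cv = __float2int_rn(cam.fy * q.y / q.z + cam.cy);
    if (cu < 0 || cu >= cw || cv < 0 || cv >= ch) return;

    // Merge RGB and thermal by a simple linear importance weight
    // (a stand-in for the operator-driven merging described in the paper).
    uchar3 c = rgb[cv * cw + cu];
    float  t = thermal[cv * cw + cu];             // assumed normalized to [0,1]
    colors[i] = make_float3((1.f - wThermal) * (c.x / 255.f) + wThermal * t,
                            (1.f - wThermal) * (c.y / 255.f) + wThermal * t,
                            (1.f - wThermal) * (c.z / 255.f) + wThermal * t);
}

int main() {
    // Dummy device buffers only, to show the launch; no real sensor data.
    const int dw = 320, dh = 240, cw = 640, ch = 480;
    float *depth, *thermal; uchar3 *rgb; float3 *pts, *cols;
    cudaMalloc(&depth,   dw * dh * sizeof(float));
    cudaMalloc(&thermal, cw * ch * sizeof(float));
    cudaMalloc(&rgb,     cw * ch * sizeof(uchar3));
    cudaMalloc(&pts,     dw * dh * sizeof(float3));
    cudaMalloc(&cols,    dw * dh * sizeof(float3));
    cudaMemset(depth, 0, dw * dh * sizeof(float));

    Intrinsics tofK{262.f, 262.f, 160.f, 120.f};      // made-up intrinsics
    Intrinsics camK{525.f, 525.f, 320.f, 240.f};
    Pose T{{1,0,0, 0,1,0, 0,0,1}, {0.05f, 0.f, 0.f}}; // made-up extrinsics

    dim3 block(16, 16), grid((dw + 15) / 16, (dh + 15) / 16);
    mapDepthToColor<<<grid, block>>>(depth, dw, dh, rgb, thermal, cw, ch,
                                     tofK, camK, T, 0.5f, pts, cols);
    cudaDeviceSynchronize();
    printf("kernel: %s\n", cudaGetErrorString(cudaGetLastError()));
    return 0;
}
```

Because every pixel is independent, the whole pipeline can run in parallel on the GPU, which is where the reported latency reduction comes from; in the paper, the calibrated intrinsics and extrinsics used as placeholders here would come from the hybrid multi-sensor calibration pattern.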
ASJC Scopus subject areas
- Computer Science (all): Artificial Intelligence
- Computer Science (all): Information Systems
- Engineering (all): Control and Systems Engineering
- Computer Science (all): Computer Vision and Pattern Recognition
Cite this
Kleinschmidt, S. P., & Wagner, B. (2016). GPU-accelerated Multi-sensor 3D Mapping for Remote Control of Mobile Robots using Virtual Reality. In O. Gusikhin, D. Peaucelle, & K. Madani (Eds.), Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics (pp. 19-29). https://doi.org/10.5220/0005692200190029
Publication: Contribution to book/report/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - GPU-accelerated Multi-sensor 3D Mapping for Remote Control of Mobile Robots using Virtual Reality
AU - Kleinschmidt, Sebastian P.
AU - Wagner, Bernardo
PY - 2016
Y1 - 2016
AB - In this paper, a new virtual reality (VR) control concept for operating robots in search and rescue (SAR) scenarios is introduced. The presented approach intuitively provides different sensor signals such as RGB, thermal and active infrared images by projecting them onto 3D structures generated by a Time-of-Flight (ToF)-based depth camera. The multichannel 3D data are displayed on an Oculus Rift head-mounted display, which also provides head-tracking information. Using 3D structures can improve the perception of scale and depth by enabling stereoscopic images, which cannot be generated from stand-alone 2D images. Besides the described operating concept, the main contributions of this paper are the introduction of a hybrid calibration pattern for multi-sensor calibration and a high-performance 2D-to-3D mapping procedure. To ensure low latencies, all steps of the algorithm are performed in parallel on a graphics processing unit (GPU), reducing processing time by 80.03% compared with a traditional central processing unit (CPU) implementation. Furthermore, the different input images are merged according to their importance for the operator to create a multi-sensor point cloud.
KW - Augmented reality
KW - GPU-acceleration
KW - Sensor fusion
KW - Virtual environments
UR - http://www.scopus.com/inward/record.url?scp=85013066178&partnerID=8YFLogxK
U2 - 10.5220/0005692200190029
DO - 10.5220/0005692200190029
M3 - Conference contribution
SP - 19
EP - 29
BT - Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics
A2 - Gusikhin, Oleg
A2 - Peaucelle, Dimitri
A2 - Madani, Kurosh
ER -