Details
Original language | English
---|---
Title of host publication | 2022 IEEE Intelligent Vehicles Symposium
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 1127-1134
Number of pages | 8
ISBN (electronic) | 9781665488211
ISBN (print) | 9781665488228
Publication status | Published - 19 Jul 2022
Event | 2022 IEEE Intelligent Vehicles Symposium, IV 2022, Aachen, Germany (5 Jun 2022 → 9 Jun 2022)
Publication series
Name | IEEE Intelligent Vehicles Symposium, Proceedings
---|---
Volume | 2022-June
Abstract
Improvements in sensor technologies as well as machine learning methods allow efficient collection, processing, and analysis of the dynamic environment, which can be used for the detection and tracking of traffic participants. Current datasets in this domain mostly present a single view, making highly accurate pose estimation impossible due to occlusions. Integrating different, simultaneously acquired data makes it possible to exploit and develop collaboration principles that increase the quality, reliability, and integrity of the derived information. This work addresses this problem by providing a multi-view dataset that includes 2D image information (videos) obtained by up to three cameras and 3D point clouds from up to five LiDAR sensors, together with labels of the traffic participants in the scene. The measurements were conducted under different weather conditions on several days at a large junction in Hanover, Germany, resulting in a total duration of 145 minutes.
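As described above, each synchronized sample in such a multi-view dataset combines up to three camera frames, up to five LiDAR point clouds, and the labels of the traffic participants. Below is a minimal Python sketch of how such samples might be iterated; the directory layout, file names (meta.json, per-timestamp .jpg/.npy/.json files), and label format are assumptions for illustration, not the dataset's published structure.

```python
# Hypothetical loader sketch: the directory layout, file names, and label format
# below are assumptions for illustration, not the dataset's actual structure.
from pathlib import Path
import json

import numpy as np


def iter_synchronized_samples(root: Path, session: str):
    """Yield (camera frame paths, LiDAR point clouds, labels) per timestamp."""
    meta = json.loads((root / session / "meta.json").read_text())  # assumed index file
    for ts in meta["timestamps"]:
        # Up to three camera views: one image per camera and timestamp (assumed naming).
        frames = {cam: root / session / "camera" / cam / f"{ts}.jpg"
                  for cam in meta["cameras"]}
        # Up to five LiDAR sensors: Nx4 arrays (x, y, z, intensity), assumed to be
        # registered in a common frame so the simultaneous views can be fused directly.
        clouds = {lidar: np.load(root / session / "lidar" / lidar / f"{ts}.npy")
                  for lidar in meta["lidars"]}
        # Traffic-participant labels for this timestamp (assumed JSON with 3D boxes).
        labels = json.loads((root / session / "labels" / f"{ts}.json").read_text())
        yield frames, clouds, labels


if __name__ == "__main__":
    for frames, clouds, labels in iter_synchronized_samples(Path("data"), "session1"):
        merged = np.vstack(list(clouds.values()))  # fuse the simultaneous LiDAR views
        print(len(frames), "camera frames,", merged.shape[0], "fused points,", len(labels), "labels")
        break
```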
ASJC Scopus subject areas
- Computer Science (all)
- Computer Science Applications
- Engineering (all)
- Automotive Engineering
- Mathematics (all)
- Modelling and Simulation
Cite this
Busch, S., Koetsier, C., Axmann, J., & Brenner, C. (2022). LUMPI. In 2022 IEEE Intelligent Vehicles Symposium (pp. 1127-1134). Institute of Electrical and Electronics Engineers Inc. (IEEE Intelligent Vehicles Symposium, Proceedings; Vol. 2022-June). https://doi.org/10.1109/IV51971.2022.9827157
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - LUMPI
T2 - 2022 IEEE Intelligent Vehicles Symposium, IV 2022
AU - Busch, Steffen
AU - Koetsier, Christian
AU - Axmann, Jeldrik
AU - Brenner, Claus
N1 - Funding Information: This project is supported by the German Research Foundation (DFG), as part of the Research Training Group i.c.sens, GRK 2159, "Integrity and Collaboration in Dynamic Sensor Networks", as well as the Lower Saxony Ministry of Science and Culture under grant number ZN3493 within the Lower Saxony "Vorab" of the Volkswagen Foundation and the Center for Digital Innovations.
PY - 2022/7/19
Y1 - 2022/7/19
N2 - Improvements in sensor technologies as well as machine learning methods allow efficient collection, processing, and analysis of the dynamic environment, which can be used for the detection and tracking of traffic participants. Current datasets in this domain mostly present a single view, making highly accurate pose estimation impossible due to occlusions. Integrating different, simultaneously acquired data makes it possible to exploit and develop collaboration principles that increase the quality, reliability, and integrity of the derived information. This work addresses this problem by providing a multi-view dataset that includes 2D image information (videos) obtained by up to three cameras and 3D point clouds from up to five LiDAR sensors, together with labels of the traffic participants in the scene. The measurements were conducted under different weather conditions on several days at a large junction in Hanover, Germany, resulting in a total duration of 145 minutes.
AB - Improvements in sensor technologies as well as machine learning methods allow efficient collection, processing, and analysis of the dynamic environment, which can be used for the detection and tracking of traffic participants. Current datasets in this domain mostly present a single view, making highly accurate pose estimation impossible due to occlusions. Integrating different, simultaneously acquired data makes it possible to exploit and develop collaboration principles that increase the quality, reliability, and integrity of the derived information. This work addresses this problem by providing a multi-view dataset that includes 2D image information (videos) obtained by up to three cameras and 3D point clouds from up to five LiDAR sensors, together with labels of the traffic participants in the scene. The measurements were conducted under different weather conditions on several days at a large junction in Hanover, Germany, resulting in a total duration of 145 minutes.
UR - http://www.scopus.com/inward/record.url?scp=85135370304&partnerID=8YFLogxK
U2 - 10.1109/IV51971.2022.9827157
DO - 10.1109/IV51971.2022.9827157
M3 - Conference contribution
AN - SCOPUS:85135370304
SN - 9781665488228
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 1127
EP - 1134
BT - 2022 IEEE Intelligent Vehicles Symposium
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 5 June 2022 through 9 June 2022
ER -