Details
Original language | English |
---|---|
Title of host publication | 2022 IEEE Intelligent Vehicles Symposium |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 1127-1134 |
Number of pages | 8 |
ISBN (electronic) | 9781665488211 |
ISBN (print) | 9781665488228 |
Publication status | Published - 19 July 2022 |
Event | 2022 IEEE Intelligent Vehicles Symposium, IV 2022 - Aachen, Germany; Duration: 5 June 2022 → 9 June 2022 |

Publication series

Name | IEEE Intelligent Vehicles Symposium, Proceedings |
---|---|
Volume | 2022-June |
Abstract
Improvements in sensor technologies as well as machine learning methods allow for the efficient collection, processing, and analysis of the dynamic environment, which can be used for the detection and tracking of traffic participants. Current datasets in this domain mostly present a single view, making highly accurate pose estimation impossible due to occlusions. Integrating different, simultaneously acquired data makes it possible to exploit and develop collaboration principles that increase the quality, reliability, and integrity of the derived information. This work addresses this problem by providing a multi-view dataset that includes 2D image information (videos) obtained by up to three cameras and 3D point clouds from up to five LiDAR sensors, together with labels of the traffic participants in the scene. The measurements were conducted under different weather conditions on several days at a large junction in Hanover, Germany, resulting in a total duration of 145 minutes.
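The abstract describes the dataset's multi-view structure: simultaneously acquired point clouds from several LiDAR sensors (plus camera videos) combined to reduce occlusion and improve pose estimation. The sketch below illustrates the basic fusion step such a setup implies, namely transforming each sensor's points into a common reference frame before detection, tracking, and labeling. It is only a minimal sketch under the assumption that per-sensor rigid-body calibration matrices are available; the sensor names, calibration values, and point data are illustrative placeholders and not part of the dataset's actual file format or API.

```python
# Minimal sketch (hypothetical, not the dataset's actual API): fuse point
# clouds from several LiDAR sensors into one common reference frame.
import numpy as np


def to_homogeneous(points: np.ndarray) -> np.ndarray:
    """Append a column of ones so 4x4 rigid-body transforms can be applied."""
    return np.hstack([points, np.ones((points.shape[0], 1))])


def fuse_point_clouds(clouds: dict, calibrations: dict) -> np.ndarray:
    """Transform each sensor's Nx3 cloud into the common frame and stack them."""
    fused = []
    for sensor, points in clouds.items():
        T = calibrations[sensor]                       # 4x4 sensor-to-world transform
        world = (to_homogeneous(points) @ T.T)[:, :3]  # apply transform, drop w
        fused.append(world)
    return np.vstack(fused)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two hypothetical LiDAR sensors with synthetic points and calibrations.
    clouds = {"lidar_0": rng.normal(size=(100, 3)),
              "lidar_1": rng.normal(size=(80, 3))}
    shift = np.eye(4)
    shift[:3, 3] = [5.0, 0.0, 0.0]                     # lidar_1 mounted 5 m away
    calibrations = {"lidar_0": np.eye(4), "lidar_1": shift}
    merged = fuse_point_clouds(clouds, calibrations)
    print(merged.shape)                                # (180, 3): one combined cloud
```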
ASJC Scopus subject areas
- Computer Science (all)
- Computer Science Applications
- Engineering (all)
- Automotive Engineering
- Mathematics (all)
- Modelling and Simulation
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
2022 IEEE Intelligent Vehicles Symposium. Institute of Electrical and Electronics Engineers Inc., 2022. pp. 1127-1134 (IEEE Intelligent Vehicles Symposium, Proceedings; Vol. 2022-June).
Publication: Contribution to book/report/anthology/conference proceedings › Conference contribution › Research › Peer reviewed
TY - GEN
T1 - LUMPI
T2 - 2022 IEEE Intelligent Vehicles Symposium, IV 2022
AU - Busch, Steffen
AU - Koetsier, Christian
AU - Axmann, Jeldrik
AU - Brenner, Claus
N1 - Funding Information: This project is supported by the German Research Foundation (DFG), as part of the Research Training Group i.c.sens, GRK 2159, "Integrity and Collaboration in Dynamic Sensor Networks", as well as the Lower Saxony Ministry of Science and Culture under grant number ZN3493 within the Lower Saxony "Vorab" of the Volkswagen Foundation and the Center for Digital Innovations.
PY - 2022/7/19
Y1 - 2022/7/19
N2 - Improvements in sensor technologies as well as machine learning methods allow for the efficient collection, processing, and analysis of the dynamic environment, which can be used for the detection and tracking of traffic participants. Current datasets in this domain mostly present a single view, making highly accurate pose estimation impossible due to occlusions. Integrating different, simultaneously acquired data makes it possible to exploit and develop collaboration principles that increase the quality, reliability, and integrity of the derived information. This work addresses this problem by providing a multi-view dataset that includes 2D image information (videos) obtained by up to three cameras and 3D point clouds from up to five LiDAR sensors, together with labels of the traffic participants in the scene. The measurements were conducted under different weather conditions on several days at a large junction in Hanover, Germany, resulting in a total duration of 145 minutes.
AB - Improvements in sensor technologies as well as machine learning methods allow for the efficient collection, processing, and analysis of the dynamic environment, which can be used for the detection and tracking of traffic participants. Current datasets in this domain mostly present a single view, making highly accurate pose estimation impossible due to occlusions. Integrating different, simultaneously acquired data makes it possible to exploit and develop collaboration principles that increase the quality, reliability, and integrity of the derived information. This work addresses this problem by providing a multi-view dataset that includes 2D image information (videos) obtained by up to three cameras and 3D point clouds from up to five LiDAR sensors, together with labels of the traffic participants in the scene. The measurements were conducted under different weather conditions on several days at a large junction in Hanover, Germany, resulting in a total duration of 145 minutes.
UR - http://www.scopus.com/inward/record.url?scp=85135370304&partnerID=8YFLogxK
U2 - 10.1109/IV51971.2022.9827157
DO - 10.1109/IV51971.2022.9827157
M3 - Conference contribution
AN - SCOPUS:85135370304
SN - 9781665488228
T3 - IEEE Intelligent Vehicles Symposium, Proceedings
SP - 1127
EP - 1134
BT - 2022 IEEE Intelligent Vehicles Symposium
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 5 June 2022 through 9 June 2022
ER -