Details
| Original language | English |
| --- | --- |
| Pages (from-to) | 27-41 |
| Number of pages | 15 |
| Journal | ISPRS Journal of Photogrammetry and Remote Sensing |
| Volume | 204 |
| Early online date | 8 Sept 2023 |
| Publication status | Published - Oct 2023 |
Abstract

Safety is critical for autonomous driving, and one aspect of improving safety is to accurately capture the uncertainties of the perception system, especially knowing the unknown. Different from only providing deterministic or probabilistic results, e.g., probabilistic object detection, that only provide partial information for the perception scenario, we propose a complete probabilistic model named GevBEV. It interprets the 2D driving space as a probabilistic Bird's Eye View (BEV) map with point-based spatial Gaussian distributions, from which one can draw evidence as the parameters for the categorical Dirichlet distribution of any new sample point in the continuous driving space. The experimental results show that GevBEV not only provides more reliable uncertainty quantification but also outperforms the previous works on the benchmarks OPV2V and V2V4Real of BEV map interpretation for cooperative perception in simulated and real-world driving scenarios, respectively. A critical factor in cooperative perception is the data transmission size through the communication channels. GevBEV helps reduce communication overhead by selecting only the most important information to share from the learned uncertainty, reducing the average information communicated by 87% with only a slight performance drop. Our code is published at https://github.com/YuanYunshuang/GevBEV.
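The core mechanism described in the abstract can be illustrated with a small sketch: spatial Gaussian kernels around observed BEV points spread per-class evidence, and the evidence accumulated at any query location serves as the concentration parameters of a categorical Dirichlet distribution, whose total mass also yields an uncertainty score. The snippet below is a minimal NumPy sketch of that general idea using standard subjective-logic formulas; it is not the authors' implementation (see the linked GitHub repository for that), and the function names, kernel width, and toy inputs are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of evidence-to-Dirichlet mapping in a BEV map.
import numpy as np

def accumulate_evidence(query_xy, point_xy, point_evidence, sigma=1.0):
    """Spread per-point class evidence with an isotropic Gaussian kernel and
    sum it at each query location. Shapes: queries (Q,2), points (P,2),
    evidence (P,K) with non-negative entries. sigma is an assumed kernel width."""
    d2 = ((query_xy[:, None, :] - point_xy[None, :, :]) ** 2).sum(-1)  # (Q,P) squared distances
    w = np.exp(-0.5 * d2 / sigma ** 2)                                 # (Q,P) Gaussian weights
    return w @ point_evidence                                          # (Q,K) accumulated evidence

def dirichlet_outputs(evidence):
    """Evidence -> Dirichlet concentration alpha, expected class probabilities,
    and vacuity-style uncertainty u = K / sum(alpha) (subjective logic)."""
    alpha = evidence + 1.0
    s = alpha.sum(-1, keepdims=True)
    probs = alpha / s
    uncertainty = alpha.shape[-1] / s.squeeze(-1)   # ~1 where no evidence was observed
    return alpha, probs, uncertainty

# Toy example: two observed BEV points with class evidence, three query locations.
points = np.array([[0.0, 0.0], [4.0, 0.0]])
evidence = np.array([[6.0, 0.5],    # strong evidence for class 0
                     [0.5, 5.0]])   # strong evidence for class 1
queries = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 0.0]])

alpha, probs, u = dirichlet_outputs(accumulate_evidence(queries, points, evidence))
print(probs.round(2), u.round(2))   # far-away query -> uniform probabilities, u close to 1

# Hypothetical selection step: keep only locations whose uncertainty is low,
# i.e. where the map actually carries information worth transmitting.
share_mask = u < 0.5
```

The thresholded mask at the end is only a hypothetical stand-in for the uncertainty-based selection that the abstract credits with the 87% reduction in communicated data.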
Keywords
- cs.CV
- Semantic segmentation
- Bird's eye view
- Cooperative perception
- Evidential deep learning
ASJC Scopus subject areas
- Earth and Planetary Sciences (all)
- Computers in Earth Sciences
- Engineering (all)
- Engineering (miscellaneous)
- Physics and Astronomy (all)
- Atomic and Molecular Physics, and Optics
- Computer Science (all)
- Computer Science Applications
Cite this
Yuan, Y., Cheng, H., Yang, M. Y., & Sester, M. (2023). Generating Evidential BEV Maps in Continuous Driving Space. In: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 204, 10.2023, p. 27-41.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Generating Evidential BEV Maps in Continuous Driving Space
AU - Yuan, Yunshuang
AU - Cheng, Hao
AU - Yang, Michael Ying
AU - Sester, Monika
N1 - This work is supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 227198829/GRK1931 and MSCA European Postdoctoral Fellowships under the 101062870 – VeVuSafety project.
PY - 2023/10
Y1 - 2023/10
N2 - Safety is critical for autonomous driving, and one aspect of improving safety is to accurately capture the uncertainties of the perception system, especially knowing the unknown. Different from only providing deterministic or probabilistic results, e.g., probabilistic object detection, that only provide partial information for the perception scenario, we propose a complete probabilistic model named GevBEV. It interprets the 2D driving space as a probabilistic Bird's Eye View (BEV) map with point-based spatial Gaussian distributions, from which one can draw evidence as the parameters for the categorical Dirichlet distribution of any new sample point in the continuous driving space. The experimental results show that GevBEV not only provides more reliable uncertainty quantification but also outperforms the previous works on the benchmarks OPV2V and V2V4Real of BEV map interpretation for cooperative perception in simulated and real-world driving scenarios, respectively. A critical factor in cooperative perception is the data transmission size through the communication channels. GevBEV helps reduce communication overhead by selecting only the most important information to share from the learned uncertainty, reducing the average information communicated by 87% with only a slight performance drop. Our code is published at https://github.com/YuanYunshuang/GevBEV.
AB - Safety is critical for autonomous driving, and one aspect of improving safety is to accurately capture the uncertainties of the perception system, especially knowing the unknown. Different from only providing deterministic or probabilistic results, e.g., probabilistic object detection, that only provide partial information for the perception scenario, we propose a complete probabilistic model named GevBEV. It interprets the 2D driving space as a probabilistic Bird's Eye View (BEV) map with point-based spatial Gaussian distributions, from which one can draw evidence as the parameters for the categorical Dirichlet distribution of any new sample point in the continuous driving space. The experimental results show that GevBEV not only provides more reliable uncertainty quantification but also outperforms the previous works on the benchmarks OPV2V and V2V4Real of BEV map interpretation for cooperative perception in simulated and real-world driving scenarios, respectively. A critical factor in cooperative perception is the data transmission size through the communication channels. GevBEV helps reduce communication overhead by selecting only the most important information to share from the learned uncertainty, reducing the average information communicated by 87% with only a slight performance drop. Our code is published at https://github.com/YuanYunshuang/GevBEV.
KW - cs.CV
KW - Semantic segmentation
KW - Bird's eye view
KW - Cooperative perception
KW - Evidential deep learning
UR - http://www.scopus.com/inward/record.url?scp=85170410578&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2302.02928
DO - 10.48550/arXiv.2302.02928
M3 - Article
VL - 204
SP - 27
EP - 41
JO - ISPRS Journal of Photogrammetry and Remote Sensing
JF - ISPRS Journal of Photogrammetry and Remote Sensing
SN - 0924-2716
ER -