Details
| Original language | English |
| --- | --- |
| Pages (from-to) | 195-207 |
| Number of pages | 13 |
| Journal | PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science |
| Volume | 89 |
| Issue number | 3 |
| Early online date | 6 May 2021 |
| Publication status | Published - Jun 2021 |
Abstract
Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a mobile mapping light detection and ranging (LiDAR) dataset. The data were obtained in 14 measurement runs between March and October 2017 in Hannover and comprise about 15 billion points in total. The point cloud data are first segmented by region growing and then classified by a random forest, which assigns the segments to five static classes (“facade”, “pole”, “fence”, “traffic sign”, and “vegetation”) and three dynamic classes (“vehicle”, “bicycle”, “person”) with an overall accuracy of 94%. All static objects are entered into a voxel grid so that different measurement epochs can be compared directly. In the next step, the classified voxels are combined with the result of a visibility analysis: a ray tracing algorithm detects the voxels traversed by each measurement ray and distinguishes empty space from occlusion. Each voxel is then classified as suitable for the static reference map or not, based on its object class and its occupation state across the epochs. This avoids eliminating static voxels that were merely occluded in some of the measurement runs (e.g. parts of a building hidden behind a tree). Segments that are only temporarily present but attached to static objects, such as scaffolds or awnings on buildings, are nevertheless excluded from the reference map. Overall, the combination of the classification with the subsequent entry of the classes into a voxel grid yields useful results that can be updated by incorporating new measurement data.
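The abstract describes a voxel-based fusion of per-epoch occupancy with a ray-tracing visibility analysis. The Python sketch below is only one plausible reading of that decision rule, not the authors' implementation: points are binned into a voxel grid, the voxels traversed by a sensor ray are approximated by dense sampling (a simplification of the paper's ray tracing), and a voxel is kept for the static reference map if it was occupied at least once and never observed as empty. The voxel size, function names, and the exact fusion rule are illustrative assumptions.

```python
import numpy as np

VOXEL_SIZE = 0.5  # assumed edge length in metres; not specified in the abstract


def voxel_index(points, origin, size=VOXEL_SIZE):
    """Map 3D points to integer voxel indices of a regular grid."""
    return np.floor((np.asarray(points) - origin) / size).astype(int)


def traversed_voxels(sensor_pos, hit_point, origin, size=VOXEL_SIZE):
    """Voxels crossed on the way from the sensor to the measured point.

    Approximated by sampling along the ray at half-voxel steps; the endpoint
    voxel itself is excluded, since it counts as occupied rather than empty.
    """
    direction = np.asarray(hit_point) - np.asarray(sensor_pos)
    length = np.linalg.norm(direction)
    n_steps = max(int(np.ceil(length / (0.5 * size))), 1)
    t = np.linspace(0.0, 1.0, n_steps, endpoint=False)
    samples = np.asarray(sensor_pos) + t[:, None] * direction
    hit = tuple(voxel_index(hit_point, origin, size))
    return {tuple(v) for v in voxel_index(samples, origin, size)} - {hit}


def fuse_epochs(epoch_states):
    """Fuse per-epoch voxel states into a static reference map.

    epoch_states: list of dicts mapping voxel index -> 'occupied' or 'empty';
    voxels that were neither hit nor traversed in an epoch are simply absent
    (unobserved, e.g. occluded). A voxel enters the static map if it was
    occupied at least once and never observed as empty.
    """
    all_voxels = set().union(*(set(s) for s in epoch_states)) if epoch_states else set()
    static = set()
    for voxel in all_voxels:
        observations = [s[voxel] for s in epoch_states if voxel in s]
        if 'occupied' in observations and 'empty' not in observations:
            static.add(voxel)
    return static


# Minimal usage: voxel (1, 0, 0) is occluded in epoch 2 but still kept,
# while (2, 0, 0) was only ever seen as empty space and is discarded.
epoch_1 = {(0, 0, 0): 'occupied', (1, 0, 0): 'occupied', (2, 0, 0): 'empty'}
epoch_2 = {(0, 0, 0): 'occupied'}
print(fuse_epochs([epoch_1, epoch_2]))  # -> {(0, 0, 0), (1, 0, 0)}
```

The sampling-based traversal above merely stands in for the paper's ray tracing, and the per-class handling of segments (e.g. excluding scaffolds attached to facades) is not modelled here.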
Keywords
- 3D point cloud, Change detection, Classification, LiDAR, Mobile mapping, Segmentation
ASJC Scopus subject areas
- Social Sciences (all)
- Geography, Planning and Development
- Physics and Astronomy (all)
- Instrumentation
- Earth and Planetary Sciences (all)
- Earth and Planetary Sciences (miscellaneous)
Cite this
Classification and Change Detection in Mobile Mapping LiDAR Point Clouds. / Voelsen, Mirjana; Schachtschneider, Julia; Brenner, Claus. In: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, Vol. 89, No. 3, 06.2021, p. 195-207.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Classification and Change Detection in Mobile Mapping LiDAR Point Clouds
AU - Voelsen, Mirjana
AU - Schachtschneider, Julia
AU - Brenner, Claus
N1 - Funding Information: Julia Schachtschneider was supported by the German Research Foundation (DFG), as part of the Research Training Group i.c.sens, GRK 2159, ‘Integrity and Collaboration in Dynamic Sensor Networks’. The long-term measurement campaign used in this paper was also conducted within the scope of this project.
PY - 2021/6
Y1 - 2021/6
N2 - Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a mobile mapping light detection and ranging (LiDAR) dataset. The data were obtained in 14 measurement runs between March and October 2017 in Hannover and comprise about 15 billion points in total. The point cloud data are first segmented by region growing and then classified by a random forest, which assigns the segments to five static classes (“facade”, “pole”, “fence”, “traffic sign”, and “vegetation”) and three dynamic classes (“vehicle”, “bicycle”, “person”) with an overall accuracy of 94%. All static objects are entered into a voxel grid so that different measurement epochs can be compared directly. In the next step, the classified voxels are combined with the result of a visibility analysis: a ray tracing algorithm detects the voxels traversed by each measurement ray and distinguishes empty space from occlusion. Each voxel is then classified as suitable for the static reference map or not, based on its object class and its occupation state across the epochs. This avoids eliminating static voxels that were merely occluded in some of the measurement runs (e.g. parts of a building hidden behind a tree). Segments that are only temporarily present but attached to static objects, such as scaffolds or awnings on buildings, are nevertheless excluded from the reference map. Overall, the combination of the classification with the subsequent entry of the classes into a voxel grid yields useful results that can be updated by incorporating new measurement data.
AB - Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a mobile mapping light detection and ranging (LiDAR) dataset. The data were obtained in 14 measurement runs between March and October 2017 in Hannover and comprise about 15 billion points in total. The point cloud data are first segmented by region growing and then classified by a random forest, which assigns the segments to five static classes (“facade”, “pole”, “fence”, “traffic sign”, and “vegetation”) and three dynamic classes (“vehicle”, “bicycle”, “person”) with an overall accuracy of 94%. All static objects are entered into a voxel grid so that different measurement epochs can be compared directly. In the next step, the classified voxels are combined with the result of a visibility analysis: a ray tracing algorithm detects the voxels traversed by each measurement ray and distinguishes empty space from occlusion. Each voxel is then classified as suitable for the static reference map or not, based on its object class and its occupation state across the epochs. This avoids eliminating static voxels that were merely occluded in some of the measurement runs (e.g. parts of a building hidden behind a tree). Segments that are only temporarily present but attached to static objects, such as scaffolds or awnings on buildings, are nevertheless excluded from the reference map. Overall, the combination of the classification with the subsequent entry of the classes into a voxel grid yields useful results that can be updated by incorporating new measurement data.
KW - 3D point cloud
KW - Change detection
KW - Classification
KW - LiDAR
KW - Mobile mapping
KW - Segmentation
UR - http://www.scopus.com/inward/record.url?scp=85105512603&partnerID=8YFLogxK
U2 - 10.1007/s41064-021-00148-x
DO - 10.1007/s41064-021-00148-x
M3 - Article
AN - SCOPUS:85105512603
VL - 89
SP - 195
EP - 207
JO - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science
JF - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science
SN - 2512-2789
IS - 3
ER -