Classification and Change Detection in Mobile Mapping LiDAR Point Clouds

Research output: Contribution to journal › Article › Research › peer review


Details

Original language: English
Pages (from-to): 195-207
Number of pages: 13
Journal: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science
Volume: 89
Issue number: 3
Early online date: 6 May 2021
Publication status: Published - Jun 2021

Abstract

Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a Mobile Mapping “light detection and ranging” (LiDAR) dataset. The data were obtained in 14 measurement runs from March to October 2017 in Hannover and consist of about 15 billion points in total. The point cloud data are first segmented by region growing and then processed by a random forest classification, which divides the segments into five static classes (“facade”, “pole”, “fence”, “traffic sign”, and “vegetation”) and three dynamic classes (“vehicle”, “bicycle”, “person”) with an overall accuracy of 94%. All static objects are entered into a voxel grid so that different measurement epochs can be compared directly. In the next step, the classified voxels are combined with the result of a visibility analysis: a ray tracing algorithm detects traversed voxels and differentiates between empty space and occlusion. Each voxel is then classified as suitable for the static reference map or not, based on its object class and its occupation state across the different epochs. In this way, we avoid eliminating static voxels that were occluded in some of the measurement runs (e.g. parts of a building occluded by a tree). However, segments that are only temporarily present and connected to static objects, such as scaffolds or awnings on buildings, are not included in the reference map. Overall, the combination of the classification with the subsequent entry of the classes into a voxel grid provides good and useful results that can be updated by including new measurement data.
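The per-voxel decision described in the abstract — keep a voxel in the static reference map only if its segment class is static and it was occupied in every epoch in which it was actually observable — can be sketched as follows. The paper does not publish code, so the state names, the `keep_in_reference_map` function, and the exact rule details below are illustrative assumptions, not the authors' implementation.

```python
from enum import Enum

class Obs(Enum):
    """Per-epoch observation state of a voxel (names are illustrative)."""
    OCCUPIED = 1   # a measured point fell into the voxel
    EMPTY = 2      # a laser ray traversed the voxel without a return
    OCCLUDED = 3   # no ray reached the voxel in this epoch

# The five static classes named in the abstract.
STATIC_CLASSES = {"facade", "pole", "fence", "traffic sign", "vegetation"}

def keep_in_reference_map(object_class: str, epoch_states: list[Obs]) -> bool:
    """Decide whether a voxel belongs to the static reference map.

    A voxel is kept if its class is static and it was occupied in every
    epoch in which it was observable; occluded epochs (e.g. a facade
    hidden behind a tree in some runs) do not count against it.
    """
    if object_class not in STATIC_CLASSES:
        return False
    observable = [s for s in epoch_states if s is not Obs.OCCLUDED]
    if not observable:
        # Never observed in any epoch: no evidence, do not keep.
        return False
    return all(s is Obs.OCCUPIED for s in observable)
```

With this rule, a facade voxel that was occluded in several runs but occupied whenever visible is retained, while a parked vehicle (dynamic class) or a voxel seen empty in some observable epoch (e.g. a removed scaffold) is excluded.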

Keywords

    3D point cloud, Change detection, Classification, LiDAR, Mobile mapping, Segmentation


Cite this

Classification and Change Detection in Mobile Mapping LiDAR Point Clouds. / Voelsen, Mirjana; Schachtschneider, Julia; Brenner, Claus.
In: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, Vol. 89, No. 3, 06.2021, p. 195-207.


Voelsen, M, Schachtschneider, J & Brenner, C 2021, 'Classification and Change Detection in Mobile Mapping LiDAR Point Clouds', PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, vol. 89, no. 3, pp. 195-207. https://doi.org/10.1007/s41064-021-00148-x
Voelsen, M., Schachtschneider, J., & Brenner, C. (2021). Classification and Change Detection in Mobile Mapping LiDAR Point Clouds. PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, 89(3), 195-207. https://doi.org/10.1007/s41064-021-00148-x
Voelsen M, Schachtschneider J, Brenner C. Classification and Change Detection in Mobile Mapping LiDAR Point Clouds. PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science. 2021 Jun;89(3):195-207. Epub 2021 May 6. doi: 10.1007/s41064-021-00148-x
Voelsen, Mirjana ; Schachtschneider, Julia ; Brenner, Claus. / Classification and Change Detection in Mobile Mapping LiDAR Point Clouds. In: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science. 2021 ; Vol. 89, No. 3. pp. 195-207.
@article{2b8e2598f20340cba2b87487e982bdd1,
title = "Classification and Change Detection in Mobile Mapping LiDAR Point Clouds",
abstract = "Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a Mobile Mapping “light detection and ranging” (LiDAR) dataset. The data was obtained in 14 measurement runs from March to October 2017 in Hannover and consists in total of about 15 billion points. The point cloud data are first segmented by region growing and then processed by a random forest classification, which divides the segments into the five static classes (“facade”, “pole”, “fence”, “traffic sign”, and “vegetation”) and three dynamic classes (“vehicle”, “bicycle”, “person”) with an overall accuracy of 94%. All static objects are entered into a voxel grid, to compare different measurement epochs directly. In the next step, the classified voxels are combined with the result of a visibility analysis. Therefore, we use a ray tracing algorithm to detect traversed voxels and differentiate between empty space and occlusion. Each voxel is classified as suitable for the static reference map or not by its object class and its occupation state during different epochs. Thereby, we avoid to eliminate static voxels which were occluded in some of the measurement runs (e.g. parts of a building occluded by a tree). However, segments that are only temporarily present and connected to static objects, such as scaffolds or awnings on buildings, are not included in the reference map. Overall, the combination of the classification with the subsequent entry of the classes into a voxel grid provides good and useful results that can be updated by including new measurement data.",
keywords = "3D point cloud, Change detection, Classification, LiDAR, Mobile mapping, Segmentation",
author = "Mirjana Voelsen and Julia Schachtschneider and Claus Brenner",
note = "Funding Information: Julia Schachtschneider was supported by the German Research Foundation (DFG), as part of the Research Training Group i.c.sens, GRK 2159, {\textquoteleft}Integrity and Collaboration in Dynamic Sensor Networks{\textquoteright}. The long term measurement campaign used in this paper was also conducted within the scope of this project. ",
year = "2021",
month = jun,
doi = "10.1007/s41064-021-00148-x",
language = "English",
volume = "89",
pages = "195--207",
number = "3",
journal = "PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science",

}


TY - JOUR

T1 - Classification and Change Detection in Mobile Mapping LiDAR Point Clouds

AU - Voelsen, Mirjana

AU - Schachtschneider, Julia

AU - Brenner, Claus

N1 - Funding Information: Julia Schachtschneider was supported by the German Research Foundation (DFG), as part of the Research Training Group i.c.sens, GRK 2159, ‘Integrity and Collaboration in Dynamic Sensor Networks’. The long term measurement campaign used in this paper was also conducted within the scope of this project.

PY - 2021/6

Y1 - 2021/6

N2 - Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a Mobile Mapping “light detection and ranging” (LiDAR) dataset. The data was obtained in 14 measurement runs from March to October 2017 in Hannover and consists in total of about 15 billion points. The point cloud data are first segmented by region growing and then processed by a random forest classification, which divides the segments into the five static classes (“facade”, “pole”, “fence”, “traffic sign”, and “vegetation”) and three dynamic classes (“vehicle”, “bicycle”, “person”) with an overall accuracy of 94%. All static objects are entered into a voxel grid, to compare different measurement epochs directly. In the next step, the classified voxels are combined with the result of a visibility analysis. Therefore, we use a ray tracing algorithm to detect traversed voxels and differentiate between empty space and occlusion. Each voxel is classified as suitable for the static reference map or not by its object class and its occupation state during different epochs. Thereby, we avoid to eliminate static voxels which were occluded in some of the measurement runs (e.g. parts of a building occluded by a tree). However, segments that are only temporarily present and connected to static objects, such as scaffolds or awnings on buildings, are not included in the reference map. Overall, the combination of the classification with the subsequent entry of the classes into a voxel grid provides good and useful results that can be updated by including new measurement data.

AB - Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a Mobile Mapping “light detection and ranging” (LiDAR) dataset. The data was obtained in 14 measurement runs from March to October 2017 in Hannover and consists in total of about 15 billion points. The point cloud data are first segmented by region growing and then processed by a random forest classification, which divides the segments into the five static classes (“facade”, “pole”, “fence”, “traffic sign”, and “vegetation”) and three dynamic classes (“vehicle”, “bicycle”, “person”) with an overall accuracy of 94%. All static objects are entered into a voxel grid, to compare different measurement epochs directly. In the next step, the classified voxels are combined with the result of a visibility analysis. Therefore, we use a ray tracing algorithm to detect traversed voxels and differentiate between empty space and occlusion. Each voxel is classified as suitable for the static reference map or not by its object class and its occupation state during different epochs. Thereby, we avoid to eliminate static voxels which were occluded in some of the measurement runs (e.g. parts of a building occluded by a tree). However, segments that are only temporarily present and connected to static objects, such as scaffolds or awnings on buildings, are not included in the reference map. Overall, the combination of the classification with the subsequent entry of the classes into a voxel grid provides good and useful results that can be updated by including new measurement data.

KW - 3D point cloud

KW - Change detection

KW - Classification

KW - LiDAR

KW - Mobile mapping

KW - Segmentation

UR - http://www.scopus.com/inward/record.url?scp=85105512603&partnerID=8YFLogxK

U2 - 10.1007/s41064-021-00148-x

DO - 10.1007/s41064-021-00148-x

M3 - Article

AN - SCOPUS:85105512603

VL - 89

SP - 195

EP - 207

JO - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science

JF - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science

SN - 2512-2789

IS - 3

ER -