Gap completion in point cloud scene occluded by vehicles using SGC-Net

Publication: Contribution to journal › Article › Research › Peer review

Authors

  • Yu Feng
  • Yiming Xu
  • Yan Xia
  • Claus Brenner
  • Monika Sester

External organizations

  • Technische Universität München (TUM)

Details

Original language: English
Pages (from–to): 331-350
Number of pages: 20
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 215
Early online date: 23 Jul 2024
Publication status: Published - Sept. 2024

Abstract

Recent advances in mobile mapping systems have greatly enhanced the efficiency and convenience of acquiring urban 3D data. These systems utilize LiDAR sensors mounted on vehicles to capture vast cityscapes. However, a significant challenge arises due to occlusions caused by roadside parked vehicles, leading to the loss of scene information, particularly on the roads, sidewalks, curbs, and the lower sections of buildings. In this study, we present a novel approach that leverages deep neural networks to learn a model capable of filling gaps in urban scenes that are obscured by vehicle occlusion. We have developed an innovative technique where we place virtual vehicle models along road boundaries in the gap-free scene and utilize a ray-casting algorithm to create a new scene with occluded gaps. This allows us to generate diverse and realistic urban point cloud scenes with and without vehicle occlusion, surpassing the limitations of real-world training data collection and annotation. Furthermore, we introduce the Scene Gap Completion Network (SGC-Net), an end-to-end model that can generate well-defined shape boundaries and smooth surfaces within occluded gaps. The experimental results reveal that 97.66% of the filled points fall within a range of 5 centimeters relative to the high-density ground truth point cloud scene. These findings underscore the efficacy of our proposed model in gap completion and reconstructing urban scenes affected by vehicle occlusions.
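The "97.66% of filled points within 5 cm" figure corresponds to a point-to-point distance check against the dense ground-truth cloud. The paper's exact evaluation protocol is not reproduced here; the sketch below is only an illustrative version of that kind of metric, where `fraction_within`, the brute-force nearest-neighbour search, and the toy plane-patch data are all assumptions for demonstration.

```python
import numpy as np

def fraction_within(pred, gt, tol=0.05):
    """Fraction of predicted points whose nearest ground-truth point
    lies within `tol` metres (brute force; fine for small clouds)."""
    # Pairwise distances, shape (n_pred, n_gt), via broadcasting.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    return float((d.min(axis=1) <= tol).mean())

# Toy example: a flat ground-truth patch and a slightly noisy "completion".
rng = np.random.default_rng(0)
gt = np.column_stack([rng.uniform(0, 1, 500),
                      rng.uniform(0, 1, 500),
                      np.zeros(500)])
pred = gt[:100] + rng.normal(0, 0.01, (100, 3))  # ~1 cm noise

acc = fraction_within(pred, gt, tol=0.05)
```

For real scene-sized clouds, the quadratic distance matrix would be replaced by a spatial index (e.g. a k-d tree); the brute-force version is kept here only to make the metric's definition explicit.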

ASJC Scopus subject areas

Cite

Gap completion in point cloud scene occluded by vehicles using SGC-Net. / Feng, Yu; Xu, Yiming; Xia, Yan et al.
In: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 215, 09.2024, p. 331-350.


Feng Y, Xu Y, Xia Y, Brenner C, Sester M. Gap completion in point cloud scene occluded by vehicles using SGC-Net. ISPRS Journal of Photogrammetry and Remote Sensing. 2024 Sep;215:331-350. Epub 2024 Jul 23. doi: 10.1016/j.isprsjprs.2024.07.009
Feng, Yu ; Xu, Yiming ; Xia, Yan et al. / Gap completion in point cloud scene occluded by vehicles using SGC-Net. In: ISPRS Journal of Photogrammetry and Remote Sensing. 2024 ; Vol. 215. pp. 331-350.
BibTeX
@article{53307baa4f6144c1b9fd05f5d7c77f0b,
title = "Gap completion in point cloud scene occluded by vehicles using SGC-Net",
abstract = "Recent advances in mobile mapping systems have greatly enhanced the efficiency and convenience of acquiring urban 3D data. These systems utilize LiDAR sensors mounted on vehicles to capture vast cityscapes. However, a significant challenge arises due to occlusions caused by roadside parked vehicles, leading to the loss of scene information, particularly on the roads, sidewalks, curbs, and the lower sections of buildings. In this study, we present a novel approach that leverages deep neural networks to learn a model capable of filling gaps in urban scenes that are obscured by vehicle occlusion. We have developed an innovative technique where we place virtual vehicle models along road boundaries in the gap-free scene and utilize a ray-casting algorithm to create a new scene with occluded gaps. This allows us to generate diverse and realistic urban point cloud scenes with and without vehicle occlusion, surpassing the limitations of real-world training data collection and annotation. Furthermore, we introduce the Scene Gap Completion Network (SGC-Net), an end-to-end model that can generate well-defined shape boundaries and smooth surfaces within occluded gaps. The experiment results reveal that 97.66% of the filled points fall within a range of 5 centimeters relative to the high-density ground truth point cloud scene. These findings underscore the efficacy of our proposed model in gap completion and reconstructing urban scenes affected by vehicle occlusions.",
keywords = "3D scene completion, LiDAR mobile mapping, Point cloud processing",
author = "Yu Feng and Yiming Xu and Yan Xia and Claus Brenner and Monika Sester",
note = "Publisher Copyright: {\textcopyright} 2024 The Author(s)",
year = "2024",
month = sep,
doi = "10.1016/j.isprsjprs.2024.07.009",
language = "English",
volume = "215",
pages = "331--350",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
issn = "0924-2716",
publisher = "Elsevier",

}

RIS

TY - JOUR

T1 - Gap completion in point cloud scene occluded by vehicles using SGC-Net

AU - Feng, Yu

AU - Xu, Yiming

AU - Xia, Yan

AU - Brenner, Claus

AU - Sester, Monika

N1 - Publisher Copyright: © 2024 The Author(s)

PY - 2024/9

Y1 - 2024/9

N2 - Recent advances in mobile mapping systems have greatly enhanced the efficiency and convenience of acquiring urban 3D data. These systems utilize LiDAR sensors mounted on vehicles to capture vast cityscapes. However, a significant challenge arises due to occlusions caused by roadside parked vehicles, leading to the loss of scene information, particularly on the roads, sidewalks, curbs, and the lower sections of buildings. In this study, we present a novel approach that leverages deep neural networks to learn a model capable of filling gaps in urban scenes that are obscured by vehicle occlusion. We have developed an innovative technique where we place virtual vehicle models along road boundaries in the gap-free scene and utilize a ray-casting algorithm to create a new scene with occluded gaps. This allows us to generate diverse and realistic urban point cloud scenes with and without vehicle occlusion, surpassing the limitations of real-world training data collection and annotation. Furthermore, we introduce the Scene Gap Completion Network (SGC-Net), an end-to-end model that can generate well-defined shape boundaries and smooth surfaces within occluded gaps. The experiment results reveal that 97.66% of the filled points fall within a range of 5 centimeters relative to the high-density ground truth point cloud scene. These findings underscore the efficacy of our proposed model in gap completion and reconstructing urban scenes affected by vehicle occlusions.

AB - Recent advances in mobile mapping systems have greatly enhanced the efficiency and convenience of acquiring urban 3D data. These systems utilize LiDAR sensors mounted on vehicles to capture vast cityscapes. However, a significant challenge arises due to occlusions caused by roadside parked vehicles, leading to the loss of scene information, particularly on the roads, sidewalks, curbs, and the lower sections of buildings. In this study, we present a novel approach that leverages deep neural networks to learn a model capable of filling gaps in urban scenes that are obscured by vehicle occlusion. We have developed an innovative technique where we place virtual vehicle models along road boundaries in the gap-free scene and utilize a ray-casting algorithm to create a new scene with occluded gaps. This allows us to generate diverse and realistic urban point cloud scenes with and without vehicle occlusion, surpassing the limitations of real-world training data collection and annotation. Furthermore, we introduce the Scene Gap Completion Network (SGC-Net), an end-to-end model that can generate well-defined shape boundaries and smooth surfaces within occluded gaps. The experiment results reveal that 97.66% of the filled points fall within a range of 5 centimeters relative to the high-density ground truth point cloud scene. These findings underscore the efficacy of our proposed model in gap completion and reconstructing urban scenes affected by vehicle occlusions.

KW - 3D scene completion

KW - LiDAR mobile mapping

KW - Point cloud processing

UR - http://www.scopus.com/inward/record.url?scp=85199285161&partnerID=8YFLogxK

U2 - 10.1016/j.isprsjprs.2024.07.009

DO - 10.1016/j.isprsjprs.2024.07.009

M3 - Article

AN - SCOPUS:85199285161

VL - 215

SP - 331

EP - 350

JO - ISPRS Journal of Photogrammetry and Remote Sensing

JF - ISPRS Journal of Photogrammetry and Remote Sensing

SN - 0924-2716

ER -
