3D feature point extraction from LiDAR data using a neural network

Publication: Contribution to journal › Conference article › Research › Peer reviewed

Authors

  • Y. Feng
  • A. Schlichting
  • C. Brenner

Details

Original language: English
Pages (from-to): 563-569
Number of pages: 7
Journal: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
Volume: 2016-January
Publication status: Published - 2016
Event: 23rd International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Congress, ISPRS 2016 - Prague, Czech Republic
Duration: 12 July 2016 - 19 July 2016

Abstract

Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted from both reference data and online sensor data and then matched to improve the positioning accuracy of the vehicles. However, some environments contain only a limited number of poles. 3D feature points are a suitable alternative type of landmark: they can be assumed to be present in the environment, independent of specific object classes. To match online LiDAR data against a LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train one using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on range images of the LiDAR point cloud. Trained with the backpropagation algorithm, the artificial neural network then predicts feature points among these corner candidates. The training considers not only the shape of each corner candidate in the 2D range images, but also 3D features such as the curvature and the z component of the surface normal, which are computed directly from the LiDAR point cloud. Subsequently, the feature points extracted in the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our tests show that the proposed method provides a sufficient number of repeatable 3D feature points for the matching task. The extracted feature points have great potential to serve as landmarks for improved vehicle localization.
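
The pipeline described above can be illustrated with a short, hypothetical sketch. The following Python fragment shows how such a detector could be assembled from off-the-shelf components: OpenCV's Shi-Tomasi corner detector on a range image, simple PCA-based curvature and normal estimates from the point cloud, and a small backpropagation-trained multilayer perceptron (scikit-learn's MLPClassifier) that filters the corner candidates. It is not the authors' implementation; all function names, window sizes and network parameters are assumptions made for illustration, and the inputs range_img (H x W range image in metres) and xyz (H x W x 3 per-pixel 3D points) are assumed to be given.

# A minimal, illustrative sketch (not the authors' code). Assumes a LiDAR scan
# already projected to a range image `range_img` (H x W, metres) and the
# corresponding per-pixel 3D points `xyz` (H x W x 3). All names and
# parameter values below are hypothetical.
import numpy as np
import cv2
from sklearn.neural_network import MLPClassifier

def corner_candidates(range_img, max_corners=500):
    """Shi-Tomasi corner candidates on the 2D range image (pixel coordinates u, v)."""
    img = cv2.normalize(range_img.astype(np.float32), None, 0.0, 1.0, cv2.NORM_MINMAX)
    pts = cv2.goodFeaturesToTrack(img, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    return pts.reshape(-1, 2).astype(int) if pts is not None else np.empty((0, 2), int)

def local_geometry(xyz, u, v, win=3):
    """PCA-based curvature and |normal_z| from the 3D neighbourhood of pixel (v, u)."""
    patch = xyz[max(v - win, 0):v + win + 1, max(u - win, 0):u + win + 1].reshape(-1, 3)
    patch = patch - patch.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(patch.T))   # eigenvalues in ascending order
    curvature = evals[0] / max(evals.sum(), 1e-9)    # surface-variation measure
    normal_z = abs(evecs[2, 0])                      # z component of the normal direction
    return curvature, normal_z

def candidate_features(range_img, xyz, pts, half=5):
    """2D range-image patch around each candidate, plus its 3D curvature / normal-z cues."""
    feats = []
    for u, v in pts:
        window = range_img[max(v - half, 0):v + half + 1, max(u - half, 0):u + half + 1]
        window = np.resize(window, (2 * half + 1, 2 * half + 1)).ravel()  # crude padding at borders
        curvature, normal_z = local_geometry(xyz, u, v)
        feats.append(np.concatenate([window, [curvature, normal_z]]))
    return np.asarray(feats)

# Hypothetical usage: labels y_train would mark which candidates are repeatable
# feature points; the MLP is trained with backpropagation-based gradient descent.
# pts = corner_candidates(range_img)
# net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# net.fit(candidate_features(range_img, xyz, pts_train), y_train)
# keep = net.predict(candidate_features(range_img, xyz, pts)) == 1
# feature_points_3d = xyz[pts[keep, 1], pts[keep, 0]]   # map the 2D detections back to 3D

Restricting the network to Shi-Tomasi candidates keeps the classification step cheap and mirrors the two-stage structure described in the abstract.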

ASJC Scopus subject areas

Cite

3D feature point extraction from LiDAR data using a neural network. / Feng, Y.; Schlichting, A.; Brenner, C.
In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, Vol. 2016-January, 2016, pp. 563-569.


Feng, Y, Schlichting, A & Brenner, C 2016, '3D feature point extraction from LiDAR data using a neural network', International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, vol. 2016-January, pp. 563-569. https://doi.org/10.5194/isprsarchives-XLI-B1-563-2016
Feng, Y., Schlichting, A., & Brenner, C. (2016). 3D feature point extraction from LiDAR data using a neural network. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, 2016-January, 563-569. https://doi.org/10.5194/isprsarchives-XLI-B1-563-2016
Feng Y, Schlichting A, Brenner C. 3D feature point extraction from LiDAR data using a neural network. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives. 2016;2016-January:563-569. doi: 10.5194/isprsarchives-XLI-B1-563-2016
Feng, Y. ; Schlichting, A. ; Brenner, C. / 3D feature point extraction from LiDAR data using a neural network. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives. 2016 ; Vol. 2016-January. pp. 563-569.
BibTeX
@article{d4fff45b899f47af9d22e8ba27838543,
title = "3D feature point extraction from LiDAR data using a neural network",
abstract = "Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are one of the proper alternatives to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is firstly detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on 2D range images, but also their 3D features such as the curvature value and surface normal value in z axis, which are calculated directly based on the LiDAR point cloud. Subsequently the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in the 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.",
keywords = "3D feature points extraction, LiDAR, Mobile mapping system, Neural network",
author = "Y. Feng and A. Schlichting and C. Brenner",
year = "2016",
doi = "10.5194/isprsarchives-XLI-B1-563-2016",
language = "English",
volume = "2016-January",
pages = "563--569",
note = "23rd International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Congress, ISPRS 2016 ; Conference date: 12-07-2016 Through 19-07-2016",

}

RIS

TY - JOUR

T1 - 3D feature point extraction from LiDAR data using a neural network

AU - Feng, Y.

AU - Schlichting, A.

AU - Brenner, C.

PY - 2016

Y1 - 2016

KW - 3D feature points extraction

KW - LiDAR

KW - Mobile mapping system

KW - Neural network

UR - http://www.scopus.com/inward/record.url?scp=84987924608&partnerID=8YFLogxK

U2 - 10.5194/isprsarchives-XLI-B1-563-2016

DO - 10.5194/isprsarchives-XLI-B1-563-2016

M3 - Conference article

AN - SCOPUS:84987924608

VL - 2016-January

SP - 563

EP - 569

JO - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives

JF - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives

SN - 1682-1750

T2 - 23rd International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Congress, ISPRS 2016

Y2 - 12 July 2016 through 19 July 2016

ER -