Details
| Original language | English |
|---|---|
| Pages (from-to) | 711-716 |
| Number of pages | 6 |
| Journal | International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives |
| Volume | 43 |
| Issue number | B2 |
| Publication status | Published - 12 Aug 2020 |
| Event | 2020 24th ISPRS Congress - Technical Commission II, Nice, Virtual, France. Duration: 31 Aug 2020 → 2 Sept 2020 |
Abstract
The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10'000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+, using our approach.
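The multi-view correction step described in the abstract can be sketched as a majority vote over the per-view DCNN predictions for each 3D point: views whose label disagrees with the consensus are treated as outliers and overwritten. This is only a minimal illustration of the idea; the function and variable names are hypothetical and not taken from the paper, which may use a more elaborate aggregation scheme.

```python
from collections import Counter

def correct_multiview_labels(point_to_views):
    """For each 3D object point, take the majority label across all 2D
    views it projects into, and replace disagreeing (outlier) per-view
    predictions with that consensus label."""
    corrected = {}
    for point_id, view_labels in point_to_views.items():
        # view_labels: {view_id: class predicted by the pretrained DCNN
        # at the pixel this 3D point projects to in that view}
        majority, _ = Counter(view_labels.values()).most_common(1)[0]
        corrected[point_id] = {view: majority for view in view_labels}
    return corrected

# Hypothetical example: point 7 is visible in 4 images; one view disagrees.
views = {7: {"img_a": "road", "img_b": "road", "img_c": "car", "img_d": "road"}}
fixed = correct_multiview_labels(views)
```

The corrected labels would then serve as the retraining targets for adapting the network to the new sensor data.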
Keywords
- Deep Learning, MMS, Multi-view, Point Cloud, Transfer Learning
ASJC Scopus subject areas
- Computer Science (all)
- Information Systems
- Social Sciences (all)
- Geography, Planning and Development
Cite this
- Standard
In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, Vol. 43, No. B2, 12.08.2020, p. 711-716.
Research output: Contribution to journal › Conference article › Research › peer review
TY - JOUR
T1 - Improving deep learning based semantic segmentation with multi view outlier correction
AU - Peters, Torben
AU - Brenner, Claus
AU - Song, M.
N1 - Funding information: This work was funded by the German Research Foundation (DFG) as a part of the Research Training Group GRK2159, ‘Integrity and collaboration in dynamic sensor networks (i.c.sens)’.
PY - 2020/8/12
Y1 - 2020/8/12
N2 - The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10'000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+, using our approach.
AB - The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10'000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+, using our approach.
KW - Deep Learning
KW - MMS
KW - Multi-view
KW - Point Cloud
KW - Transfer Learning
UR - http://www.scopus.com/inward/record.url?scp=85091067250&partnerID=8YFLogxK
U2 - 10.5194/isprs-archives-XLIII-B2-2020-711-2020
DO - 10.5194/isprs-archives-XLIII-B2-2020-711-2020
M3 - Conference article
AN - SCOPUS:85091067250
VL - 43
SP - 711
EP - 716
JO - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
JF - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
SN - 1682-1750
IS - B2
T2 - 2020 24th ISPRS Congress - Technical Commission II
Y2 - 31 August 2020 through 2 September 2020
ER -