Details
Original language | English |
---|---|
Pages (from - to) | 711-716 |
Number of pages | 6 |
Journal | International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives |
Volume | 43 |
Issue number | B2 |
Publication status | Published - 12 Aug. 2020 |
Event | 2020 24th ISPRS Congress - Technical Commission II - Nice, Virtual, France. Duration: 31 Aug. 2020 → 2 Sept. 2020 |
Abstract
The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional neural network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10,000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+, using our approach.
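The outlier-correction step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the per-point label lists, and the `min_views`/`min_agreement` thresholds are all assumptions made for illustration. The idea is only that each 3D point collects one predicted class label per 2D image it projects into, and a majority vote overrules outlier predictions:

```python
from collections import Counter

def correct_labels(point_predictions, min_views=3, min_agreement=0.6):
    """Majority-vote outlier correction for multi-view predictions.

    point_predictions maps a 3D point id to the list of class labels
    predicted for it in the 2D images it projects into. Points with
    enough agreeing views get the majority label; ambiguous or rarely
    observed points are dropped from the retraining set.
    """
    corrected = {}
    for point_id, labels in point_predictions.items():
        if len(labels) < min_views:
            continue  # too few observations to judge reliability
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            corrected[point_id] = label  # outlier votes overruled
    return corrected

# Point 7 is seen in 5 images with one outlier prediction;
# point 8 has too few views and is dropped.
preds = {7: ["road", "road", "car", "road", "road"],
         8: ["car", "road"]}
print(correct_labels(preds))  # {7: 'road'}
```

The corrected per-point labels would then be projected back into the images to produce the cleaned dataset used for retraining the DCNN.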
ASJC Scopus subject areas
- Computer Science (all)
- Information Systems
- Social Sciences (all)
- Geography, Planning and Development
In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, Vol. 43, No. B2, 12.08.2020, pp. 711-716.
Publication: Contribution to journal › Conference article › Research › Peer review
TY - JOUR
T1 - Improving deep learning based semantic segmentation with multi view outlier correction
AU - Peters, Torben
AU - Brenner, Claus
AU - Song, M.
N1 - Funding information: This work was funded by the German Research Foundation (DFG) as a part of the Research Training Group GRK2159, ‘Integrity and collaboration in dynamic sensor networks (i.c.sens)’.
PY - 2020/8/12
Y1 - 2020/8/12
N2 - The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional neural network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10,000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+, using our approach.
AB - The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional neural network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10,000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+, using our approach.
KW - Deep Learning
KW - MMS
KW - Multi-view
KW - Point Cloud
KW - Transfer Learning
UR - http://www.scopus.com/inward/record.url?scp=85091067250&partnerID=8YFLogxK
U2 - 10.5194/isprs-archives-XLIII-B2-2020-711-2020
DO - 10.5194/isprs-archives-XLIII-B2-2020-711-2020
M3 - Conference article
AN - SCOPUS:85091067250
VL - 43
SP - 711
EP - 716
JO - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
JF - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives
SN - 1682-1750
IS - B2
T2 - 2020 24th ISPRS Congress - Technical Commission II
Y2 - 31 August 2020 through 2 September 2020
ER -