Adversarial domain adaptation for the classification of aerial images and height data using convolutional neural networks

Publication: Contribution to journal › Conference article › Research › Peer review

Authors

  • Dennis Wittich
  • Franz Rottensteiner

Details

Original language: English
Pages (from - to): 197-204
Number of pages: 8
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 4
Issue number: 2/W7
Publication status: Electronically published (E-pub) - 16 Sept 2019
Event: 1st Photogrammetric Image Analysis and Munich Remote Sensing Symposium, PIA 2019+MRSS 2019 - Munich, Germany
Duration: 18 Sept 2019 - 20 Sept 2019

Abstract

Domain adaptation (DA) can drastically decrease the amount of training data needed to obtain good classification models by leveraging available data from a source domain for the classification of a new (target) domain. In this paper, we address deep DA, i.e. DA with deep convolutional neural networks (CNNs), a problem that has not been addressed frequently in remote sensing. We present a new method for semi-supervised DA for the task of pixel-based classification by a CNN. After proposing an encoder-decoder-based fully convolutional neural network (FCN), we adapt a method for adversarial discriminative DA so that it is applicable to the pixel-based classification of remotely sensed data based on this network. The method aims to learn a domain-invariant feature representation; domain invariance is measured by a classifier's inability to predict from which domain a sample was drawn. We evaluate our FCN on the ISPRS labelling challenge, showing that it is close to the best-performing models. DA is evaluated on the basis of three domains. We compare different network configurations and perform the representation transfer at different layers of the network. We show that, when a suitable layer is used for adaptation, our method achieves a positive transfer and thus an improved classification accuracy in the target domain for all evaluated combinations of source and target domains.
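To make the adaptation strategy concrete, the sketch below shows a generic feature-level adversarial DA setup for pixel-wise classification in PyTorch: a small encoder-decoder FCN is trained with a supervised loss on the source domain, while a per-pixel domain discriminator attached to an intermediate feature map pushes the features toward domain invariance (the discriminator tries to tell source from target features; the network is trained to fool it). All layer sizes, the choice of adapted layer, the loss weight and the optimiser settings are illustrative assumptions, not the configuration used in the paper.

# Minimal sketch (not the authors' implementation): feature-level adversarial DA
# for pixel-wise classification with a small encoder-decoder FCN in PyTorch.
import torch
import torch.nn as nn

class FCN(nn.Module):
    """Tiny encoder-decoder FCN; features() exposes the representation to adapt."""
    def __init__(self, in_ch=4, n_classes=6):  # e.g. image + height input, 6 classes (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # downsample by 2
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),     # upsample back
            nn.Conv2d(32, n_classes, 1),                            # per-pixel class scores
        )
    def features(self, x):
        return self.encoder(x)
    def forward(self, x):
        return self.decoder(self.encoder(x))

# Domain discriminator: predicts per feature location whether the input came
# from the source or the target domain.
discriminator = nn.Sequential(
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 1),             # one source-vs-target logit per location
)

net = FCN()
seg_loss = nn.CrossEntropyLoss()
adv_loss = nn.BCEWithLogitsLoss()
opt_net = torch.optim.Adam(net.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(x_src, y_src, x_tgt, lam=0.1):
    """One adversarial DA step; lam weights the adversarial term (assumed value)."""
    # 1) Supervised segmentation loss on labelled source data.
    opt_net.zero_grad()
    l_seg = seg_loss(net(x_src), y_src)

    # 2) Adversarial loss: make target features indistinguishable from source
    #    features, i.e. fool the discriminator into predicting "source" (label 1).
    d_tgt = discriminator(net.features(x_tgt))
    l_adv = adv_loss(d_tgt, torch.ones_like(d_tgt))
    (l_seg + lam * l_adv).backward()
    opt_net.step()

    # 3) Discriminator update: distinguish source (1) from target (0) features.
    opt_disc.zero_grad()
    d_src = discriminator(net.features(x_src).detach())
    d_tgt = discriminator(net.features(x_tgt).detach())
    l_disc = adv_loss(d_src, torch.ones_like(d_src)) + adv_loss(d_tgt, torch.zeros_like(d_tgt))
    l_disc.backward()
    opt_disc.step()
    return l_seg.item(), l_adv.item(), l_disc.item()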


Cite

Adversarial domain adaptation for the classification of aerial images and height data using convolutional neural networks. / Wittich, Dennis; Rottensteiner, Franz.
In: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 4, No. 2/W7, 16.09.2019, p. 197-204.

Publication: Contribution to journal › Conference article › Research › Peer review

Wittich D, Rottensteiner F. Adversarial domain adaptation for the classification of aerial images and height data using convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing. 2019 Sep 16;4(2/W7):197-204. Epub 2019 Sep 16. doi: 10.5194/isprs-annals-IV-2-W7-197-2019
@article{0c356e21095a4bea90745eea8cb2793d,
title = "Adversarial domain adaptation for the classification of aerial images and height data using convolutional neural networks",
abstract = "Domain adaptation (DA) can drastically decrease the amount of training data needed to obtain good classification models by leveraging available data from a source domain for the classification of a new (target) domain. In this paper, we address deep DA, i.e. DA with deep convolutional neural networks (CNNs), a problem that has not been addressed frequently in remote sensing. We present a new method for semi-supervised DA for the task of pixel-based classification by a CNN. After proposing an encoder-decoder-based fully convolutional neural network (FCN), we adapt a method for adversarial discriminative DA so that it is applicable to the pixel-based classification of remotely sensed data based on this network. The method aims to learn a domain-invariant feature representation; domain invariance is measured by a classifier's inability to predict from which domain a sample was drawn. We evaluate our FCN on the ISPRS labelling challenge, showing that it is close to the best-performing models. DA is evaluated on the basis of three domains. We compare different network configurations and perform the representation transfer at different layers of the network. We show that, when a suitable layer is used for adaptation, our method achieves a positive transfer and thus an improved classification accuracy in the target domain for all evaluated combinations of source and target domains.",
keywords = "Classification, Domain Adaptation, Fully Convolutional Networks, Segmentation",
author = "Dennis Wittich and Franz Rottensteiner",
note = "Funding Information: This work was partially funded by the Federal Ministry of Education and Research, Germany (Bundesministerium f{\"u}r Bildung und Forschung, F{\"o}rderkennzeichen 01IS17076). The Vaihingen dataset was provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) (Cramer, 2010): http://www.ifp.uni-stuttgart.de/dgpf/DKEP-Allg.html. The 3City dataset is an extract from the geospatial data of the Lower Saxony survey and cadastre administration, (c) 2013 (LGLN), the reference was provided by (Vogt et al., 2018).; 1st Photogrammetric Image Analysis and Munich Remote Sensing Symposium, PIA 2019+MRSS 2019 ; Conference date: 18-09-2019 Through 20-09-2019",
year = "2019",
month = sep,
day = "16",
doi = "10.5194/isprs-annals-IV-2-W7-197-2019",
language = "English",
volume = "4",
pages = "197--204",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
issn = "0924-2716",
publisher = "Elsevier",
number = "2/W7",

}


TY - JOUR

T1 - Adversarial domain adaptation for the classification of aerial images and height data using convolutional neural networks

AU - Wittich, Dennis

AU - Rottensteiner, Franz

N1 - Funding Information: This work was partially funded by the Federal Ministry of Education and Research, Germany (Bundesministerium für Bildung und Forschung, Förderkennzeichen 01IS17076). The Vaihingen dataset was provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) (Cramer, 2010): http://www.ifp.uni-stuttgart.de/dgpf/DKEP-Allg.html. The 3City dataset is an extract from the geospatial data of the Lower Saxony survey and cadastre administration, (c) 2013 (LGLN), the reference was provided by (Vogt et al., 2018).

PY - 2019/9/16

Y1 - 2019/9/16

N2 - Domain adaptation (DA) can drastically decrease the amount of training data needed to obtain good classification models by leveraging available data from a source domain for the classification of a new (target) domain. In this paper, we address deep DA, i.e. DA with deep convolutional neural networks (CNNs), a problem that has not been addressed frequently in remote sensing. We present a new method for semi-supervised DA for the task of pixel-based classification by a CNN. After proposing an encoder-decoder-based fully convolutional neural network (FCN), we adapt a method for adversarial discriminative DA so that it is applicable to the pixel-based classification of remotely sensed data based on this network. The method aims to learn a domain-invariant feature representation; domain invariance is measured by a classifier's inability to predict from which domain a sample was drawn. We evaluate our FCN on the ISPRS labelling challenge, showing that it is close to the best-performing models. DA is evaluated on the basis of three domains. We compare different network configurations and perform the representation transfer at different layers of the network. We show that, when a suitable layer is used for adaptation, our method achieves a positive transfer and thus an improved classification accuracy in the target domain for all evaluated combinations of source and target domains.

AB - Domain adaptation (DA) can drastically decrease the amount of training data needed to obtain good classification models by leveraging available data from a source domain for the classification of a new (target) domain. In this paper, we address deep DA, i.e. DA with deep convolutional neural networks (CNNs), a problem that has not been addressed frequently in remote sensing. We present a new method for semi-supervised DA for the task of pixel-based classification by a CNN. After proposing an encoder-decoder-based fully convolutional neural network (FCN), we adapt a method for adversarial discriminative DA so that it is applicable to the pixel-based classification of remotely sensed data based on this network. The method aims to learn a domain-invariant feature representation; domain invariance is measured by a classifier's inability to predict from which domain a sample was drawn. We evaluate our FCN on the ISPRS labelling challenge, showing that it is close to the best-performing models. DA is evaluated on the basis of three domains. We compare different network configurations and perform the representation transfer at different layers of the network. We show that, when a suitable layer is used for adaptation, our method achieves a positive transfer and thus an improved classification accuracy in the target domain for all evaluated combinations of source and target domains.

KW - Classification

KW - Domain Adaptation

KW - Fully Convolutional Networks

KW - Segmentation

UR - http://www.scopus.com/inward/record.url?scp=85078902881&partnerID=8YFLogxK

U2 - 10.5194/isprs-annals-IV-2-W7-197-2019

DO - 10.5194/isprs-annals-IV-2-W7-197-2019

M3 - Conference article

AN - SCOPUS:85078902881

VL - 4

SP - 197

EP - 204

JO - ISPRS Journal of Photogrammetry and Remote Sensing

JF - ISPRS Journal of Photogrammetry and Remote Sensing

SN - 0924-2716

IS - 2/W7

T2 - 1st Photogrammetric Image Analysis and Munich Remote Sensing Symposium, PIA 2019+MRSS 2019

Y2 - 18 September 2019 through 20 September 2019

ER -