Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series

Publication: Contribution to journal › Article › Research › Peer review

Authors

  • Maryam Teimouri
  • Mehdi Mokhtarzade
  • Nicolas Baghdadi
  • Christian Heipke

External organisations

  • K.N. Toosi University of Technology
  • Universität Montpellier

Details

Original language: English
Pages (from-to): 413-423
Number of pages: 11
Journal: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science
Volume: 91
Issue number: 6
Early online date: 26 Sep 2023
Publication status: Published - Dec 2023

Abstract

Convolutional neural networks (CNNs) have shown results superior to most traditional image understanding approaches in many fields, including crop classification from satellite time series images. However, CNNs require a large number of training samples to properly train the network. The process of collecting and labeling such samples using traditional methods can be both time-consuming and costly. To address this issue and improve classification accuracy, generating virtual training labels (VTL) from existing ones is a promising solution. To this end, this study proposes a novel method for generating VTL based on sub-dividing the training samples of each crop using self-organizing maps (SOM), and then assigning labels to a set of unlabeled pixels based on the distance to these sub-classes. We apply the new method to crop classification from Sentinel images. A three-dimensional (3D) CNN is utilized for extracting features from the fusion of optical and radar time series. The results of the evaluation show that the proposed method is effective in generating VTL, as demonstrated by the achieved overall accuracy (OA) of 95.3% and kappa coefficient (KC) of 94.5%, compared to 91.3% and 89.9% for a solution without VTL. The results suggest that the proposed method has the potential to enhance the classification accuracy of crops using VTL.
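The VTL procedure described in the abstract (sub-dividing each crop's training samples with a SOM, then labeling unlabeled pixels by distance to the resulting sub-class prototypes) can be sketched as follows. This is a minimal illustration based only on the abstract, not the authors' implementation: the 1-D SOM grid, the number of units per class, the learning-rate schedule, and the optional rejection threshold are all assumptions.

```python
import numpy as np

def train_som(X, n_units=4, epochs=50, lr0=0.5, seed=0):
    """Minimal 1-D self-organizing map; returns the unit weight vectors,
    which serve as sub-class prototypes for one crop class."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    sigma0 = n_units / 2.0
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma**2))
            W += lr * h[:, None] * (x - W)     # pull units toward the sample
    return W

def generate_vtl(labeled, labels, unlabeled, n_units=4, dist_thresh=None):
    """Sub-divide each crop's samples with a SOM, then assign each unlabeled
    pixel the class of its nearest sub-class prototype. Pixels farther than
    dist_thresh from every prototype are rejected (label -1)."""
    protos, proto_lab = [], []
    for c in np.unique(labels):
        W = train_som(labeled[labels == c], n_units)
        protos.append(W)
        proto_lab.extend([c] * len(W))
    protos = np.vstack(protos)
    proto_lab = np.array(proto_lab)
    d = np.linalg.norm(unlabeled[:, None, :] - protos[None], axis=2)
    vtl = proto_lab[d.argmin(axis=1)]
    if dist_thresh is not None:
        vtl = np.where(d.min(axis=1) <= dist_thresh, vtl, -1)
    return vtl
```

In a real pipeline the feature vectors would be the fused Sentinel-1/Sentinel-2 time-series features per pixel; here any 2-D points work for demonstration.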

ASJC Scopus subject areas

Cite this

Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series. / Teimouri, Maryam; Mokhtarzade, Mehdi; Baghdadi, Nicolas et al.
In: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, Vol. 91, No. 6, 12.2023, p. 413-423.


Teimouri, M, Mokhtarzade, M, Baghdadi, N & Heipke, C 2023, 'Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series', PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, vol. 91, no. 6, pp. 413-423. https://doi.org/10.1007/s41064-023-00256-w
Teimouri, M., Mokhtarzade, M., Baghdadi, N., & Heipke, C. (2023). Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series. PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, 91(6), 413-423. https://doi.org/10.1007/s41064-023-00256-w
Teimouri M, Mokhtarzade M, Baghdadi N, Heipke C. Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series. PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science. 2023 Dec;91(6):413-423. Epub 2023 Sep 26. doi: 10.1007/s41064-023-00256-w
Teimouri, Maryam ; Mokhtarzade, Mehdi ; Baghdadi, Nicolas et al. / Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series. In: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science. 2023 ; Vol. 91, No. 6. pp. 413-423.
BibTeX
@article{9065709c2a544c7d880f7ee9211f2c1a,
title = "Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series",
abstract = "Convolutional neural networks (CNNs) have shown results superior to most traditional image understanding approaches in many fields, including crop classification from satellite time series images. However, CNNs require a large number of training samples to properly train the network. The process of collecting and labeling such samples using traditional methods can be both time-consuming and costly. To address this issue and improve classification accuracy, generating virtual training labels (VTL) from existing ones is a promising solution. To this end, this study proposes a novel method for generating VTL based on sub-dividing the training samples of each crop using self-organizing maps (SOM), and then assigning labels to a set of unlabeled pixels based on the distance to these sub-classes. We apply the new method to crop classification from Sentinel images. A three-dimensional (3D) CNN is utilized for extracting features from the fusion of optical and radar time series. The results of the evaluation show that the proposed method is effective in generating VTL, as demonstrated by the achieved overall accuracy (OA) of 95.3% and kappa coefficient (KC) of 94.5%, compared to 91.3% and 89.9% for a solution without VTL. The results suggest that the proposed method has the potential to enhance the classification accuracy of crops using VTL.",
keywords = "3D-CNN, Crop classification, Fusion, Optical and radar image time series, Virtual training labels",
author = "Maryam Teimouri and Mehdi Mokhtarzade and Nicolas Baghdadi and Christian Heipke",
note = "Funding Information: The authors would like to express their gratitude to the European Space Agency (ESA) for supplying the Sentinel 1 and Sentinel 2 data, as well as to the Department of Agriculture, Livestock, Fishing, and Food of the Generalitat of Catalonia for supplying the field data. ",
year = "2023",
month = dec,
doi = "10.1007/s41064-023-00256-w",
language = "English",
volume = "91",
pages = "413--423",
journal = "PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science",
issn = "2512-2789",
number = "6",

}

RIS

TY - JOUR

T1 - Generating Virtual Training Labels for Crop Classification from Fused Sentinel-1 and Sentinel-2 Time Series

AU - Teimouri, Maryam

AU - Mokhtarzade, Mehdi

AU - Baghdadi, Nicolas

AU - Heipke, Christian

N1 - Funding Information: The authors would like to express their gratitude to the European Space Agency (ESA) for supplying the Sentinel 1 and Sentinel 2 data, as well as to the Department of Agriculture, Livestock, Fishing, and Food of the Generalitat of Catalonia for supplying the field data.

PY - 2023/12

Y1 - 2023/12

N2 - Convolutional neural networks (CNNs) have shown results superior to most traditional image understanding approaches in many fields, including crop classification from satellite time series images. However, CNNs require a large number of training samples to properly train the network. The process of collecting and labeling such samples using traditional methods can be both time-consuming and costly. To address this issue and improve classification accuracy, generating virtual training labels (VTL) from existing ones is a promising solution. To this end, this study proposes a novel method for generating VTL based on sub-dividing the training samples of each crop using self-organizing maps (SOM), and then assigning labels to a set of unlabeled pixels based on the distance to these sub-classes. We apply the new method to crop classification from Sentinel images. A three-dimensional (3D) CNN is utilized for extracting features from the fusion of optical and radar time series. The results of the evaluation show that the proposed method is effective in generating VTL, as demonstrated by the achieved overall accuracy (OA) of 95.3% and kappa coefficient (KC) of 94.5%, compared to 91.3% and 89.9% for a solution without VTL. The results suggest that the proposed method has the potential to enhance the classification accuracy of crops using VTL.

AB - Convolutional neural networks (CNNs) have shown results superior to most traditional image understanding approaches in many fields, including crop classification from satellite time series images. However, CNNs require a large number of training samples to properly train the network. The process of collecting and labeling such samples using traditional methods can be both time-consuming and costly. To address this issue and improve classification accuracy, generating virtual training labels (VTL) from existing ones is a promising solution. To this end, this study proposes a novel method for generating VTL based on sub-dividing the training samples of each crop using self-organizing maps (SOM), and then assigning labels to a set of unlabeled pixels based on the distance to these sub-classes. We apply the new method to crop classification from Sentinel images. A three-dimensional (3D) CNN is utilized for extracting features from the fusion of optical and radar time series. The results of the evaluation show that the proposed method is effective in generating VTL, as demonstrated by the achieved overall accuracy (OA) of 95.3% and kappa coefficient (KC) of 94.5%, compared to 91.3% and 89.9% for a solution without VTL. The results suggest that the proposed method has the potential to enhance the classification accuracy of crops using VTL.

KW - 3D-CNN

KW - Crop classification

KW - Fusion

KW - Optical and radar image time series

KW - Virtual training labels

UR - http://www.scopus.com/inward/record.url?scp=85172204057&partnerID=8YFLogxK

U2 - 10.1007/s41064-023-00256-w

DO - 10.1007/s41064-023-00256-w

M3 - Article

AN - SCOPUS:85172204057

VL - 91

SP - 413

EP - 423

JO - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science

JF - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science

SN - 2512-2789

IS - 6

ER -