Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Amir Kotobi
  • Kanishka Singh
  • Daniel Höche
  • Sadia Bari
  • Robert H. Meißner
  • Annika Bande

External organisations

  • Helmholtz-Zentrum Geesthacht Zentrum für Material- und Küstenforschung GmbH
  • Helmholtz-Zentrum Berlin für Materialien und Energie GmbH
  • Freie Universität Berlin (FU Berlin)
  • Deutsches Elektronen-Synchrotron (DESY)
  • University of Groningen (Rijksuniversiteit Groningen)
  • Technische Universität Hamburg (TUHH)

Details

Original language: English
Pages (from-to): 22584-22598
Number of pages: 15
Journal: Journal of the American Chemical Society
Volume: 145
Issue number: 41
Early online date: 9 Oct 2023
Publication status: Published - 18 Oct 2023

Abstract

The use of sophisticated machine learning (ML) models, such as graph neural networks (GNNs), to predict complex molecular properties or all kinds of spectra has grown rapidly. However, ensuring the interpretability of these models' predictions remains a challenge. For example, a rigorous understanding of the predicted X-ray absorption spectrum (XAS) generated by such ML models requires an in-depth investigation of the respective black-box ML model used. Here, this is done for different GNNs based on a comprehensive, custom-generated XAS data set for small organic molecules. We show that a thorough analysis of the different ML models with respect to the local and global environments considered in each ML model is essential for the selection of an appropriate ML model that allows a robust XAS prediction. Moreover, we employ feature attribution to determine the respective contributions of various atoms in the molecules to the peaks observed in the XAS spectrum. By comparing this peak assignment to the core and virtual orbitals from the quantum chemical calculations underlying our data set, we demonstrate that it is possible to relate the atomic contributions via these orbitals to the XAS spectrum.
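
A minimal sketch of the two ingredients the abstract describes: a graph neural network that maps a molecular graph to a discretized XAS spectrum, and a gradient-based attribution that scores each atom's contribution to one spectral peak. PyTorch and PyTorch Geometric are used here purely for illustration; the class name XASGNN, the layer choices, and the plain gradient saliency are assumptions made for this sketch, not the specific architectures or the attribution method evaluated in the paper.

# Illustrative sketch only, NOT the models from the paper.
# Assumes PyTorch and PyTorch Geometric are installed.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class XASGNN(nn.Module):
    """Map a molecular graph to intensities on a fixed energy grid."""
    def __init__(self, num_node_features, hidden=64, num_bins=100):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)  # local message passing
        self.conv2 = GCNConv(hidden, hidden)             # widens each atom's receptive field
        self.readout = nn.Linear(hidden, num_bins)       # per-molecule spectrum head

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)                   # one embedding per molecule
        return self.readout(g)                           # predicted XAS intensities

# Toy input: 3 atoms with 4 features each, bonded as a short chain 0-1-2.
x = torch.randn(3, 4, requires_grad=True)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # undirected edges
batch = torch.zeros(3, dtype=torch.long)                 # all atoms belong to molecule 0

model = XASGNN(num_node_features=4)
spectrum = model(x, edge_index, batch)                   # shape (1, 100)

# Gradient saliency as a simple stand-in for feature attribution: score each
# atom by how sensitive one spectral bin ("peak") is to its input features.
peak = spectrum[0, 42]                                   # bin index chosen arbitrarily
peak.backward()
atom_scores = x.grad.abs().sum(dim=1)                    # one relevance score per atom
print(atom_scores)

In the paper this kind of peak assignment is validated against the core and virtual orbitals of the underlying quantum chemical calculations; the plain gradient above is only the simplest stand-in for that idea.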

Cite

Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra. / Kotobi, Amir; Singh, Kanishka; Höche, Daniel et al.
In: Journal of the American Chemical Society, Vol. 145, No. 41, 18.10.2023, p. 22584-22598.

Publication: Contribution to journal › Article › Research › Peer-reviewed

Kotobi A, Singh K, Höche D, Bari S, Meißner RH, Bande A. Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra. Journal of the American Chemical Society. 2023 Oct 18;145(41):22584-22598. Epub 2023 Oct 9. doi: 10.1021/jacs.3c07513
Kotobi, Amir ; Singh, Kanishka ; Höche, Daniel et al. / Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra. In: Journal of the American Chemical Society. 2023 ; Vol. 145, No. 41. pp. 22584-22598.
@article{bdba8b39be634ecaa3563ecbf6378102,
title = "Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra",
abstract = "The use of sophisticated machine learning (ML) models, such as graph neural networks (GNNs), to predict complex molecular properties or all kinds of spectra has grown rapidly. However, ensuring the interpretability of these models' predictions remains a challenge. For example, a rigorous understanding of the predicted X-ray absorption spectrum (XAS) generated by such ML models requires an in-depth investigation of the respective black-box ML model used. Here, this is done for different GNNs based on a comprehensive, custom-generated XAS data set for small organic molecules. We show that a thorough analysis of the different ML models with respect to the local and global environments considered in each ML model is essential for the selection of an appropriate ML model that allows a robust XAS prediction. Moreover, we employ feature attribution to determine the respective contributions of various atoms in the molecules to the peaks observed in the XAS spectrum. By comparing this peak assignment to the core and virtual orbitals from the quantum chemical calculations underlying our data set, we demonstrate that it is possible to relate the atomic contributions via these orbitals to the XAS spectrum.",
author = "Amir Kotobi and Kanishka Singh and Daniel H{\"o}che and Sadia Bari and Mei{\ss}ner, {Robert H.} and Annika Bande",
note = "Funding Information: HIDA TraineeNetworkprogram, HAICUAI-4-XAS, DASHHand HEIBRiDS graduate schools",
year = "2023",
month = oct,
day = "18",
doi = "10.1021/jacs.3c07513",
language = "English",
volume = "145",
pages = "22584--22598",
journal = "Journal of the American Chemical Society",
issn = "0002-7863",
publisher = "American Chemical Society",
number = "41",

}


TY - JOUR

T1 - Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra

AU - Kotobi, Amir

AU - Singh, Kanishka

AU - Höche, Daniel

AU - Bari, Sadia

AU - Meißner, Robert H.

AU - Bande, Annika

N1 - Funding Information: HIDA Trainee Network program, HAICU AI-4-XAS, DASHH and HEIBRiDS graduate schools

PY - 2023/10/18

Y1 - 2023/10/18

N2 - The use of sophisticated machine learning (ML) models, such as graph neural networks (GNNs), to predict complex molecular properties or all kinds of spectra has grown rapidly. However, ensuring the interpretability of these models' predictions remains a challenge. For example, a rigorous understanding of the predicted X-ray absorption spectrum (XAS) generated by such ML models requires an in-depth investigation of the respective black-box ML model used. Here, this is done for different GNNs based on a comprehensive, custom-generated XAS data set for small organic molecules. We show that a thorough analysis of the different ML models with respect to the local and global environments considered in each ML model is essential for the selection of an appropriate ML model that allows a robust XAS prediction. Moreover, we employ feature attribution to determine the respective contributions of various atoms in the molecules to the peaks observed in the XAS spectrum. By comparing this peak assignment to the core and virtual orbitals from the quantum chemical calculations underlying our data set, we demonstrate that it is possible to relate the atomic contributions via these orbitals to the XAS spectrum.

AB - The use of sophisticated machine learning (ML) models, such as graph neural networks (GNNs), to predict complex molecular properties or all kinds of spectra has grown rapidly. However, ensuring the interpretability of these models' predictions remains a challenge. For example, a rigorous understanding of the predicted X-ray absorption spectrum (XAS) generated by such ML models requires an in-depth investigation of the respective black-box ML model used. Here, this is done for different GNNs based on a comprehensive, custom-generated XAS data set for small organic molecules. We show that a thorough analysis of the different ML models with respect to the local and global environments considered in each ML model is essential for the selection of an appropriate ML model that allows a robust XAS prediction. Moreover, we employ feature attribution to determine the respective contributions of various atoms in the molecules to the peaks observed in the XAS spectrum. By comparing this peak assignment to the core and virtual orbitals from the quantum chemical calculations underlying our data set, we demonstrate that it is possible to relate the atomic contributions via these orbitals to the XAS spectrum.

UR - http://www.scopus.com/inward/record.url?scp=85174752551&partnerID=8YFLogxK

U2 - 10.1021/jacs.3c07513

DO - 10.1021/jacs.3c07513

M3 - Article

C2 - 37807700

AN - SCOPUS:85174752551

VL - 145

SP - 22584

EP - 22598

JO - Journal of the American Chemical Society

JF - Journal of the American Chemical Society

SN - 0002-7863

IS - 41

ER -