Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Amir Kotobi
  • Kanishka Singh
  • Daniel Höche
  • Sadia Bari
  • Robert H. Meißner
  • Annika Bande

External Research Organisations

  • Helmholtz-Zentrum Geesthacht Centre for Materials and Coastal Research
  • Helmholtz-Zentrum Berlin für Materialien und Energie (HZB)
  • Freie Universität Berlin (FU Berlin)
  • Deutsches Elektronen-Synchrotron (DESY)
  • University of Groningen
  • Hamburg University of Technology (TUHH)

Details

Original language: English
Pages (from-to): 22584-22598
Number of pages: 15
Journal: Journal of the American Chemical Society
Volume: 145
Issue number: 41
Early online date: 9 Oct 2023
Publication status: Published - 18 Oct 2023

Abstract

The use of sophisticated machine learning (ML) models, such as graph neural networks (GNNs), to predict complex molecular properties or all kinds of spectra has grown rapidly. However, ensuring the interpretability of these models' predictions remains a challenge. For example, a rigorous understanding of the predicted X-ray absorption spectrum (XAS) generated by such ML models requires an in-depth investigation of the respective black-box ML model used. Here, this is done for different GNNs based on a comprehensive, custom-generated XAS data set for small organic molecules. We show that a thorough analysis of the different ML models with respect to the local and global environments considered in each ML model is essential for the selection of an appropriate ML model that allows a robust XAS prediction. Moreover, we employ feature attribution to determine the respective contributions of various atoms in the molecules to the peaks observed in the XAS spectrum. By comparing this peak assignment to the core and virtual orbitals from the quantum chemical calculations underlying our data set, we demonstrate that it is possible to relate the atomic contributions via these orbitals to the XAS spectrum.
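
The workflow summarized in the abstract pairs a GNN spectrum predictor with feature attribution to assign spectral peaks to individual atoms. The following is a minimal, self-contained sketch of that idea using plain gradient saliency in PyTorch; the toy model, the feature layout, and all names are illustrative assumptions and do not reproduce the authors' architecture or attribution method.

# A minimal sketch (not the authors' code) of gradient-based feature
# attribution on a GNN that predicts an XAS spectrum on a fixed energy
# grid. Architecture and names are illustrative assumptions only.
import torch
import torch.nn as nn

class ToyXASGNN(nn.Module):
    """Tiny message-passing model: node features -> spectrum (n_grid bins)."""
    def __init__(self, n_feat: int = 8, n_hidden: int = 32, n_grid: int = 100):
        super().__init__()
        self.embed = nn.Linear(n_feat, n_hidden)
        self.msg = nn.Linear(n_hidden, n_hidden)
        self.readout = nn.Linear(n_hidden, n_grid)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.embed(x))        # (n_atoms, n_hidden)
        h = torch.relu(adj @ self.msg(h))    # one message-passing step
        return self.readout(h.sum(dim=0))    # sum-pool -> (n_grid,)

def atom_attributions(model, x, adj, peak_bin: int) -> torch.Tensor:
    """Saliency: d(intensity at peak_bin)/d(node features), summed per atom."""
    x = x.clone().requires_grad_(True)
    spectrum = model(x, adj)
    spectrum[peak_bin].backward()
    return x.grad.abs().sum(dim=1)           # one score per atom

# Toy usage: 5 atoms with random features and a chain adjacency matrix.
n_atoms = 5
x = torch.randn(n_atoms, 8)
adj = torch.diag(torch.ones(n_atoms - 1), 1)
adj = adj + adj.T + torch.eye(n_atoms)       # symmetric, with self-loops
model = ToyXASGNN()
scores = atom_attributions(model, x, adj, peak_bin=42)
print(scores)  # larger score = atom contributes more to that peak

In practice one would run a trained model with a principled attribution method; the per-atom sum of absolute gradients here merely stands in for the atomic contribution scores that the abstract compares against core and virtual orbitals.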

Cite this

Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra. / Kotobi, Amir; Singh, Kanishka; Höche, Daniel et al.
In: Journal of the American Chemical Society, Vol. 145, No. 41, 18.10.2023, p. 22584-22598.

Kotobi A, Singh K, Höche D, Bari S, Meißner RH, Bande A. Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra. Journal of the American Chemical Society. 2023 Oct 18;145(41):22584-22598. Epub 2023 Oct 9. doi: 10.1021/jacs.3c07513
Kotobi, Amir ; Singh, Kanishka ; Höche, Daniel et al. / Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra. In: Journal of the American Chemical Society. 2023 ; Vol. 145, No. 41. pp. 22584-22598.
@article{bdba8b39be634ecaa3563ecbf6378102,
title = "Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra",
abstract = "The use of sophisticated machine learning (ML) models, such as graph neural networks (GNNs), to predict complex molecular properties or all kinds of spectra has grown rapidly. However, ensuring the interpretability of these models' predictions remains a challenge. For example, a rigorous understanding of the predicted X-ray absorption spectrum (XAS) generated by such ML models requires an in-depth investigation of the respective black-box ML model used. Here, this is done for different GNNs based on a comprehensive, custom-generated XAS data set for small organic molecules. We show that a thorough analysis of the different ML models with respect to the local and global environments considered in each ML model is essential for the selection of an appropriate ML model that allows a robust XAS prediction. Moreover, we employ feature attribution to determine the respective contributions of various atoms in the molecules to the peaks observed in the XAS spectrum. By comparing this peak assignment to the core and virtual orbitals from the quantum chemical calculations underlying our data set, we demonstrate that it is possible to relate the atomic contributions via these orbitals to the XAS spectrum.",
author = "Amir Kotobi and Kanishka Singh and Daniel H{\"o}che and Sadia Bari and Mei{\ss}ner, {Robert H.} and Annika Bande",
note = "Funding Information: HIDA TraineeNetworkprogram, HAICUAI-4-XAS, DASHHand HEIBRiDS graduate schools",
year = "2023",
month = oct,
day = "18",
doi = "10.1021/jacs.3c07513",
language = "English",
volume = "145",
pages = "22584--22598",
journal = "Journal of the American Chemical Society",
issn = "0002-7863",
publisher = "American Chemical Society",
number = "41",

}

TY - JOUR

T1 - Integrating Explainability into Graph Neural Network Models for the Prediction of X-ray Absorption Spectra

AU - Kotobi, Amir

AU - Singh, Kanishka

AU - Höche, Daniel

AU - Bari, Sadia

AU - Meißner, Robert H.

AU - Bande, Annika

N1 - Funding Information: HIDA Trainee Network program, HAICU AI-4-XAS, DASHH and HEIBRiDS graduate schools

PY - 2023/10/18

Y1 - 2023/10/18

N2 - The use of sophisticated machine learning (ML) models, such as graph neural networks (GNNs), to predict complex molecular properties or all kinds of spectra has grown rapidly. However, ensuring the interpretability of these models' predictions remains a challenge. For example, a rigorous understanding of the predicted X-ray absorption spectrum (XAS) generated by such ML models requires an in-depth investigation of the respective black-box ML model used. Here, this is done for different GNNs based on a comprehensive, custom-generated XAS data set for small organic molecules. We show that a thorough analysis of the different ML models with respect to the local and global environments considered in each ML model is essential for the selection of an appropriate ML model that allows a robust XAS prediction. Moreover, we employ feature attribution to determine the respective contributions of various atoms in the molecules to the peaks observed in the XAS spectrum. By comparing this peak assignment to the core and virtual orbitals from the quantum chemical calculations underlying our data set, we demonstrate that it is possible to relate the atomic contributions via these orbitals to the XAS spectrum.

AB - The use of sophisticated machine learning (ML) models, such as graph neural networks (GNNs), to predict complex molecular properties or all kinds of spectra has grown rapidly. However, ensuring the interpretability of these models' predictions remains a challenge. For example, a rigorous understanding of the predicted X-ray absorption spectrum (XAS) generated by such ML models requires an in-depth investigation of the respective black-box ML model used. Here, this is done for different GNNs based on a comprehensive, custom-generated XAS data set for small organic molecules. We show that a thorough analysis of the different ML models with respect to the local and global environments considered in each ML model is essential for the selection of an appropriate ML model that allows a robust XAS prediction. Moreover, we employ feature attribution to determine the respective contributions of various atoms in the molecules to the peaks observed in the XAS spectrum. By comparing this peak assignment to the core and virtual orbitals from the quantum chemical calculations underlying our data set, we demonstrate that it is possible to relate the atomic contributions via these orbitals to the XAS spectrum.

UR - http://www.scopus.com/inward/record.url?scp=85174752551&partnerID=8YFLogxK

U2 - 10.1021/jacs.3c07513

DO - 10.1021/jacs.3c07513

M3 - Article

C2 - 37807700

AN - SCOPUS:85174752551

VL - 145

SP - 22584

EP - 22598

JO - Journal of the American Chemical Society

JF - Journal of the American Chemical Society

SN - 0002-7863

IS - 41

ER -