ZORRO: Valid, Sparse, and Stable Explanations in Graph Neural Networks

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Thorben Funke
  • Megha Khosla
  • Mandeep Rathee
  • Avishek Anand

Organisational units

External organisations

  • Delft University of Technology

Details

Original language: English
Pages (from - to): 8687-8698
Number of pages: 12
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 35
Issue number: 8
Publication status: Published - 24 Aug 2022

Abstract

With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from other input settings. It is important to attribute the decision to input features and other related instances connected by the graph structure. We find the previous explanation generation approaches that maximize the mutual information between the label distribution produced by the model and the explanation to be restrictive. Specifically, existing approaches do not enforce explanations to be valid, sparse, or robust to input perturbations. In this paper, we lay down some of the fundamental principles that an explanation method for graph neural networks should follow and introduce a metric RDT-Fidelity as a measure of the explanation's effectiveness. We propose a novel approach Zorro based on the principles from rate-distortion theory that uses a simple combinatorial procedure to optimize for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal that Zorro produces sparser, more stable, and more faithful explanations than existing graph neural network explanation approaches.
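The abstract centres on RDT-Fidelity: the expected agreement between the model's prediction on the original input and on inputs where everything outside the explanation is replaced by noise. As a rough, simplified illustration only (the function names, the NumPy setting, and the toy model below are assumptions for this sketch, not the paper's implementation), a Monte Carlo estimate for a feature-mask explanation might look like:

```python
import numpy as np

def rdt_fidelity(model, x, mask, noise_sampler, n_samples=100, rng=None):
    """Monte Carlo estimate of an RDT-Fidelity-style score.

    Features selected by `mask` are kept fixed; all other features are
    resampled from `noise_sampler`. The score is the fraction of noisy
    inputs on which the model's predicted class is unchanged.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    base_pred = np.argmax(model(x))
    agree = 0
    for _ in range(n_samples):
        perturbed = np.where(mask, x, noise_sampler(rng, x.shape))
        agree += int(np.argmax(model(perturbed)) == base_pred)
    return agree / n_samples

# Toy model: the predicted class depends only on the sign of feature 0.
model = lambda v: np.array([v[0], -v[0]])
x = np.array([2.0, 5.0, -1.0])
noise = lambda rng, shape: rng.normal(0.0, 1.0, shape)

# Explanation that keeps the decisive feature -> perfect fidelity.
full = rdt_fidelity(model, x, np.array([True, False, False]), noise)
# Empty explanation -> fidelity drops toward chance level.
empty = rdt_fidelity(model, x, np.array([False, False, False]), noise)
```

A sparse mask that still achieves high fidelity is, in this view, a good explanation; the paper's combinatorial procedure searches for such masks over both node features and neighbouring nodes.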

ASJC Scopus subject areas

Cite

ZORRO: Valid, Sparse, and Stable Explanations in Graph Neural Networks. / Funke, Thorben; Khosla, Megha; Rathee, Mandeep et al.
In: IEEE Transactions on Knowledge and Data Engineering, Vol. 35, No. 8, 24.08.2022, p. 8687-8698.


Funke T, Khosla M, Rathee M, Anand A. ZORRO: Valid, Sparse, and Stable Explanations in Graph Neural Networks. IEEE Transactions on Knowledge and Data Engineering. 2022 Aug 24;35(8):8687-8698. doi: 10.48550/arXiv.2105.08621, 10.1109/TKDE.2022.3201170
Funke, Thorben ; Khosla, Megha ; Rathee, Mandeep et al. / ZORRO: Valid, Sparse, and Stable Explanations in Graph Neural Networks. In: IEEE Transactions on Knowledge and Data Engineering. 2022 ; Vol. 35, No. 8. pp. 8687-8698.
Download (BibTeX)
@article{4564b9a5be9f4330ab55a2008b18ce7f,
title = "ZORRO: Valid, Sparse, and Stable Explanations in Graph Neural Networks",
abstract = "With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from other input settings. It is important to attribute the decision to input features and other related instances connected by the graph structure. We find that the previous explanation generation approaches that maximize the mutual information between the label distribution produced by the model and the explanation to be restrictive. Specifically, existing approaches do not enforce explanations to be valid, sparse, or robust to input perturbations. In this paper, we lay down some of the fundamental principles that an explanation method for graph neural networks should follow and introduce a metric RDT-Fidelity as a measure of the explanation's effectiveness. We propose a novel approach Zorro based on the principles from rate-distortion theory that uses a simple combinatorial procedure to optimize for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal that Zorro produces sparser, stable, and more faithful explanations than existing graph neural network explanation approaches.",
keywords = "Computational modeling, Data models, Explainability, Feature extraction, Graph Neural Networks, Graph neural networks, Interpretability, Rate-distortion, Stability analysis, Task analysis, graph neural networks, interpretability",
author = "Thorben Funke and Megha Khosla and Mandeep Rathee and Avishek Anand",
note = "Funding Information: This work was supported in part by the project {"}CampaNeo{"} under Grant ID 01MD19007 ",
year = "2022",
month = aug,
day = "24",
doi = "10.1109/TKDE.2022.3201170",
language = "English",
volume = "35",
pages = "8687--8698",
journal = "IEEE Transactions on Knowledge and Data Engineering",
issn = "1041-4347",
publisher = "IEEE Computer Society",
number = "8",

}

Download (RIS)

TY - JOUR

T1 - ZORRO

T2 - Valid, Sparse, and Stable Explanations in Graph Neural Networks

AU - Funke, Thorben

AU - Khosla, Megha

AU - Rathee, Mandeep

AU - Anand, Avishek

N1 - Funding Information: This work was supported in part by the project "CampaNeo" under Grant ID 01MD19007

PY - 2022/8/24

Y1 - 2022/8/24

N2 - With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from other input settings. It is important to attribute the decision to input features and other related instances connected by the graph structure. We find that the previous explanation generation approaches that maximize the mutual information between the label distribution produced by the model and the explanation to be restrictive. Specifically, existing approaches do not enforce explanations to be valid, sparse, or robust to input perturbations. In this paper, we lay down some of the fundamental principles that an explanation method for graph neural networks should follow and introduce a metric RDT-Fidelity as a measure of the explanation's effectiveness. We propose a novel approach Zorro based on the principles from rate-distortion theory that uses a simple combinatorial procedure to optimize for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal that Zorro produces sparser, stable, and more faithful explanations than existing graph neural network explanation approaches.

AB - With the ever-increasing popularity and applications of graph neural networks, several proposals have been made to explain and understand the decisions of a graph neural network. Explanations for graph neural networks differ in principle from other input settings. It is important to attribute the decision to input features and other related instances connected by the graph structure. We find that the previous explanation generation approaches that maximize the mutual information between the label distribution produced by the model and the explanation to be restrictive. Specifically, existing approaches do not enforce explanations to be valid, sparse, or robust to input perturbations. In this paper, we lay down some of the fundamental principles that an explanation method for graph neural networks should follow and introduce a metric RDT-Fidelity as a measure of the explanation's effectiveness. We propose a novel approach Zorro based on the principles from rate-distortion theory that uses a simple combinatorial procedure to optimize for RDT-Fidelity. Extensive experiments on real and synthetic datasets reveal that Zorro produces sparser, stable, and more faithful explanations than existing graph neural network explanation approaches.

KW - Computational modeling

KW - Data models

KW - Explainability

KW - Feature extraction

KW - Graph Neural Networks

KW - Graph neural networks

KW - Interpretability

KW - Rate-distortion

KW - Stability analysis

KW - Task analysis

KW - graph neural networks

KW - interpretability

UR - http://www.scopus.com/inward/record.url?scp=85137574213&partnerID=8YFLogxK

U2 - 10.1109/TKDE.2022.3201170

DO - 10.1109/TKDE.2022.3201170

M3 - Article

AN - SCOPUS:85137574213

VL - 35

SP - 8687

EP - 8698

JO - IEEE Transactions on Knowledge and Data Engineering

JF - IEEE Transactions on Knowledge and Data Engineering

SN - 1041-4347

IS - 8

ER -