Adversarial Mask Explainer for Graph Neural Networks

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer reviewed

Authors

Organisational units

External organisations

  • Nanyang Technological University (NTU)

Details

Original language: English
Title of host publication: WWW '24
Subtitle: Proceedings of the ACM Web Conference 2024
Pages: 861-869
Number of pages: 9
ISBN (electronic): 9798400701719
Publication status: Published - 13 May 2024
Event: 33rd ACM Web Conference, WWW 2024 - Singapore, Singapore
Duration: 13 May 2024 → 17 May 2024

Abstract

The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter K to control the explanation size during the training process and keep only the top-K weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and K. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity.
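The prior-work setup the abstract criticizes can be illustrated with a minimal sketch: a learned soft mask assigns a weight to every edge, and the user must supply a hyperparameter K to keep only the top-K weights as the explanation set. The edge names, weights, and K below are hypothetical values for illustration, not data from the paper.

```python
# Sketch of the top-K mask selection used by earlier mask-based
# explainers (per the abstract). Not AMExplainer itself, which
# avoids the need to pick K.

def topk_explanation(edge_weights, k):
    """Keep only the k highest-weight edges as the explanation set."""
    ranked = sorted(edge_weights.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

# Hypothetical learned soft mask over five edges:
mask = {"e1": 0.91, "e2": 0.07, "e3": 0.64, "e4": 0.02, "e5": 0.33}

# With K = 2 the explanation keeps e1 and e3. Choosing a good K
# requires knowing the true explanation size in advance, which is
# exactly the limitation AMExplainer is designed to remove.
explanation = topk_explanation(mask, 2)
```

The adversarial dual objective in AMExplainer replaces this manual cutoff: sparsity emerges from the target function rather than from a user-chosen regularization weight and K.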

ASJC Scopus subject areas

Cite

Adversarial Mask Explainer for Graph Neural Networks. / Zhang, Wei; Li, Xiaofan; Nejdl, Wolfgang.
WWW '24: Proceedings of the ACM Web Conference 2024. 2024. pp. 861-869.


Zhang, W, Li, X & Nejdl, W 2024, Adversarial Mask Explainer for Graph Neural Networks. in WWW '24: Proceedings of the ACM Web Conference 2024. pp. 861-869, 33rd ACM Web Conference, WWW 2024, Singapore, Singapore, 13 May 2024. https://doi.org/10.1145/3589334.3645608
Zhang, W., Li, X., & Nejdl, W. (2024). Adversarial Mask Explainer for Graph Neural Networks. In WWW '24: Proceedings of the ACM Web Conference 2024 (pp. 861-869). https://doi.org/10.1145/3589334.3645608
Zhang W, Li X, Nejdl W. Adversarial Mask Explainer for Graph Neural Networks. In: WWW '24: Proceedings of the ACM Web Conference 2024. 2024. pp. 861-869. doi: 10.1145/3589334.3645608
Zhang, Wei ; Li, Xiaofan ; Nejdl, Wolfgang. / Adversarial Mask Explainer for Graph Neural Networks. WWW '24: Proceedings of the ACM Web Conference 2024. 2024. pp. 861-869
Download (BibTeX)
@inproceedings{66d47f59eb464bc08d11757b8793097a,
title = "Adversarial Mask Explainer for Graph Neural Networks",
abstract = "The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter K to control the explanation size during the training process and keep only the top-K weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and K. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity.",
keywords = "explainability, graph analysis, graph neural networks",
author = "Wei Zhang and Xiaofan Li and Wolfgang Nejdl",
note = "Publisher Copyright: {\textcopyright} 2024 ACM.; 33rd ACM Web Conference, WWW 2024 ; Conference date: 13-05-2024 Through 17-05-2024",
year = "2024",
month = may,
day = "13",
doi = "10.1145/3589334.3645608",
language = "English",
pages = "861--869",
booktitle = "WWW '24",

}

Download (RIS)

TY - GEN

T1 - Adversarial Mask Explainer for Graph Neural Networks

AU - Zhang, Wei

AU - Li, Xiaofan

AU - Nejdl, Wolfgang

N1 - Publisher Copyright: © 2024 ACM.

PY - 2024/5/13

Y1 - 2024/5/13

N2 - The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter K to control the explanation size during the training process and keep only the top-K weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and K. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity.

AB - The Graph Neural Networks (GNNs) model is a powerful tool for integrating node information with graph topology to learn representations and make predictions. However, the complex graph structure of GNNs has led to a lack of clear explainability in the decision-making process. Recently, there has been a growing interest in seeking instance-level explanations of the GNNs model, which aims to uncover the decision-making process of the GNNs model and provide insights into how it arrives at its final output. Previous works have focused on finding a set of weights (masks) for edges/nodes/node features to determine their importance. These works have adopted a regularization term and a hyperparameter K to control the explanation size during the training process and keep only the top-K weights as the explanation set. However, the true size of the explanation is typically unknown to users, making it difficult to provide reasonable values for the regularization term and K. In this work, we propose a novel framework AMExplainer which leverages the concept of adversarial networks to achieve a dual optimization objective in the target function. This approach ensures both accurate prediction of the mask and sparsity of the explanation set. In addition, we devise a novel scaling function to automatically sense and amplify the weights of the informative part of the graph, which filters out insignificant edges/nodes/node features for expediting the convergence of the solution during training. Our extensive experiments show that AMExplainer yields a more compelling explanation by generating a sparse set of masks while simultaneously maintaining fidelity.

KW - explainability

KW - graph analysis

KW - graph neural networks

UR - http://www.scopus.com/inward/record.url?scp=85194067054&partnerID=8YFLogxK

U2 - 10.1145/3589334.3645608

DO - 10.1145/3589334.3645608

M3 - Conference contribution

AN - SCOPUS:85194067054

SP - 861

EP - 869

BT - WWW '24

T2 - 33rd ACM Web Conference, WWW 2024

Y2 - 13 May 2024 through 17 May 2024

ER -
