Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks

Publication: Contribution to book/report/anthology/conference proceedings › Paper in conference proceedings › Research › Peer-reviewed

Authors

  • Hussain Hussain
  • Meng Cao
  • Sandipan Sikdar
  • Denis Helic
  • Elisabeth Lex
  • Markus Strohmaier
  • Roman Kern


External organisations

  • Technische Universität Graz
  • Nanjing University
  • MODUL University Vienna
  • Universität Mannheim

Details

Original language: English
Title of host publication: Proceedings
Subtitle: 22nd IEEE International Conference on Data Mining, ICDM 2022
Editors: Xingquan Zhu, Sanjay Ranka, My T. Thai, Takashi Washio, Xindong Wu
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 975-980
Number of pages: 6
ISBN (electronic): 9781665450997
ISBN (print): 978-1-6654-5100-0
Publication status: Published - 2022
Event: 22nd IEEE International Conference on Data Mining, ICDM 2022 - Orlando, United States
Duration: 28 Nov 2022 - 1 Dec 2022

Publication series

Name: Proceedings - IEEE International Conference on Data Mining, ICDM
Volume: 2022-November
ISSN (Print): 1550-4786

Abstract

We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
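
The attack recipe named in the abstract (connect nodes that differ in both the sensitive attribute and the class label) can be made concrete with a short sketch. The Python snippet below is a minimal illustration under simplifying assumptions (binary labels, a binary sensitive attribute, an undirected edge list), not the authors' implementation; the function inject_inter_group_links and the parameters sensitive, labels, and budget are hypothetical names introduced here.

# Minimal sketch of adversarial inter-group link injection as described in
# the abstract: inject links between node pairs that belong to opposite
# subgroups AND carry opposite class labels. Illustrative only; all names
# are hypothetical, not from the paper.
import itertools
import random

def inject_inter_group_links(edges, sensitive, labels, budget, seed=0):
    """Return `edges` plus up to `budget` injected inter-group links."""
    rng = random.Random(seed)
    existing = {frozenset(e) for e in edges}   # avoid duplicating edges
    nodes = range(len(labels))
    candidates = [
        (u, v)
        for u, v in itertools.combinations(nodes, 2)
        if sensitive[u] != sensitive[v]        # opposite subgroups
        and labels[u] != labels[v]             # opposite class labels
        and frozenset((u, v)) not in existing
    ]
    rng.shuffle(candidates)                    # random pick within the budget
    return list(edges) + candidates[:budget]

# Toy usage: six nodes, binary sensitive attribute, binary labels.
edges = [(0, 1), (2, 3), (4, 5)]
sensitive = [0, 0, 0, 1, 1, 1]
labels = [0, 1, 0, 1, 0, 1]
poisoned = inject_inter_group_links(edges, sensitive, labels, budget=2)
print(poisoned)  # original edges plus two injected cross-group links

The budget parameter caps the number of injected links, mirroring the abstract's observation that a low perturbation rate suffices; the paper's attack presumably selects links more deliberately, while this sketch only reproduces the inter-group, cross-label injection pattern.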

ASJC Scopus subject areas

Cite

Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. / Hussain, Hussain; Cao, Meng; Sikdar, Sandipan et al.
Proceedings: 22nd IEEE International Conference on Data Mining, ICDM 2022. Ed. / Xingquan Zhu; Sanjay Ranka; My T. Thai; Takashi Washio; Xindong Wu. Institute of Electrical and Electronics Engineers Inc., 2022. pp. 975-980 (Proceedings - IEEE International Conference on Data Mining, ICDM; Vol. 2022-November).

Hussain, H, Cao, M, Sikdar, S, Helic, D, Lex, E, Strohmaier, M & Kern, R 2022, Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. in X Zhu, S Ranka, MT Thai, T Washio & X Wu (eds), Proceedings: 22nd IEEE International Conference on Data Mining, ICDM 2022. Proceedings - IEEE International Conference on Data Mining, ICDM, vol. 2022-November, Institute of Electrical and Electronics Engineers Inc., pp. 975-980, 22nd IEEE International Conference on Data Mining, ICDM 2022, Orlando, United States, 28 Nov 2022. https://doi.org/10.48550/arXiv.2209.05957, https://doi.org/10.1109/ICDM54844.2022.00117
Hussain, H., Cao, M., Sikdar, S., Helic, D., Lex, E., Strohmaier, M., & Kern, R. (2022). Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. In X. Zhu, S. Ranka, M. T. Thai, T. Washio, & X. Wu (Eds.), Proceedings: 22nd IEEE International Conference on Data Mining, ICDM 2022 (pp. 975-980). (Proceedings - IEEE International Conference on Data Mining, ICDM; Vol. 2022-November). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.48550/arXiv.2209.05957, https://doi.org/10.1109/ICDM54844.2022.00117
Hussain H, Cao M, Sikdar S, Helic D, Lex E, Strohmaier M et al. Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. In Zhu X, Ranka S, Thai MT, Washio T, Wu X, editors, Proceedings: 22nd IEEE International Conference on Data Mining, ICDM 2022. Institute of Electrical and Electronics Engineers Inc. 2022. p. 975-980. (Proceedings - IEEE International Conference on Data Mining, ICDM). doi: 10.48550/arXiv.2209.05957, 10.1109/ICDM54844.2022.00117
Hussain, Hussain; Cao, Meng; Sikdar, Sandipan et al. / Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. Proceedings: 22nd IEEE International Conference on Data Mining, ICDM 2022. Ed. / Xingquan Zhu; Sanjay Ranka; My T. Thai; Takashi Washio; Xindong Wu. Institute of Electrical and Electronics Engineers Inc., 2022. pp. 975-980 (Proceedings - IEEE International Conference on Data Mining, ICDM).
@inproceedings{7d3ad90c342f4c21a0c94782304abe0e,
title = "Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks",
abstract = "We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.",
keywords = "adversarial attacks, fairness, graph neural networks",
author = "Hussain Hussain and Meng Cao and Sandipan Sikdar and Denis Helic and Elisabeth Lex and Markus Strohmaier and Roman Kern",
year = "2022",
doi = "10.48550/arXiv.2209.05957",
language = "English",
isbn = "978-1-6654-5100-0",
series = "Proceedings - IEEE International Conference on Data Mining, ICDM",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "975--980",
editor = "Xingquan Zhu and Sanjay Ranka and Thai, {My T.} and Takashi Washio and Xindong Wu",
booktitle = "Proceedings",
address = "United States",
note = "22nd IEEE International Conference on Data Mining, ICDM 2022 ; Conference date: 28-11-2022 Through 01-12-2022",

}


TY - GEN
T1 - Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks
AU - Hussain, Hussain
AU - Cao, Meng
AU - Sikdar, Sandipan
AU - Helic, Denis
AU - Lex, Elisabeth
AU - Strohmaier, Markus
AU - Kern, Roman
PY - 2022
Y1 - 2022
N2 - We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
AB - We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
KW - adversarial attacks
KW - fairness
KW - graph neural networks
UR - http://www.scopus.com/inward/record.url?scp=85147735764&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2209.05957
DO - 10.48550/arXiv.2209.05957
M3 - Conference contribution
AN - SCOPUS:85147735764
SN - 978-1-6654-5100-0
T3 - Proceedings - IEEE International Conference on Data Mining, ICDM
SP - 975
EP - 980
BT - Proceedings
A2 - Zhu, Xingquan
A2 - Ranka, Sanjay
A2 - Thai, My T.
A2 - Washio, Takashi
A2 - Wu, Xindong
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE International Conference on Data Mining, ICDM 2022
Y2 - 28 November 2022 through 1 December 2022
ER -