Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Hussain Hussain
  • Meng Cao
  • Sandipan Sikdar
  • Denis Helic
  • Elisabeth Lex
  • Markus Strohmaier
  • Roman Kern

External Research Organisations

  • Graz University of Technology
  • Nanjing University
  • MODUL University Vienna
  • University of Mannheim

Details

Original language: English
Title of host publication: Proceedings
Subtitle of host publication: 22nd IEEE International Conference on Data Mining, ICDM 2022
Editors: Xingquan Zhu, Sanjay Ranka, My T. Thai, Takashi Washio, Xindong Wu
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 975-980
Number of pages: 6
ISBN (electronic): 9781665450997
ISBN (print): 978-1-6654-5100-0
Publication status: Published - 2022
Event: 22nd IEEE International Conference on Data Mining, ICDM 2022 - Orlando, United States
Duration: 28 Nov 2022 – 1 Dec 2022

Publication series

Name: Proceedings - IEEE International Conference on Data Mining, ICDM
Volume: 2022-November
ISSN (Print): 1550-4786

Abstract

We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
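
To make the attack described above concrete, the following minimal Python sketch (our illustration, not the authors' code) builds the adversarial edge set the abstract describes: candidate links connect node pairs with opposite sensitive attributes and opposite class labels. The helper names (inject_intergroup_links, statistical_parity_difference), the synthetic data, and the choice of statistical parity difference as the fairness measure are all assumptions for illustration; the paper's actual attack selects edges more carefully than this random sampler.

# Hypothetical sketch of inter-group, inter-class link injection
# (assumed names and synthetic data; only the pairing constraint
# -- opposite subgroup AND opposite label -- comes from the abstract).
import numpy as np

rng = np.random.default_rng(0)
n = 200
s = rng.integers(0, 2, size=n)  # binary sensitive attribute (e.g., gender)
y = rng.integers(0, 2, size=n)  # binary class label

def inject_intergroup_links(s, y, budget, rng):
    """Sample `budget` edges (u, v) with s[u] != s[v] AND y[u] != y[v]."""
    edges = set()
    while len(edges) < budget:
        u, v = rng.integers(0, len(s), size=2)
        if u != v and s[u] != s[v] and y[u] != y[v]:
            edges.add((int(u), int(v)))
    return sorted(edges)

def statistical_parity_difference(pred, s):
    """|P(pred=1 | s=0) - P(pred=1 | s=1)|, a common group-fairness metric."""
    return abs(pred[s == 0].mean() - pred[s == 1].mean())

# A 1% perturbation rate on a graph with ~2,000 edges would be ~20 links.
adversarial_edges = inject_intergroup_links(s, y, budget=20, rng=rng)

# After adding these edges and (re)training a GNN, fairness degradation is
# measured on its predictions; `pred` stands in for those predictions here.
pred = rng.integers(0, 2, size=n)
print(len(adversarial_edges), statistical_parity_difference(pred, s))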

Keywords

    adversarial attacks, fairness, graph neural networks

Cite this

Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. / Hussain, Hussain; Cao, Meng; Sikdar, Sandipan et al.
Proceedings : 22nd IEEE International Conference on Data Mining, ICDM 2022. ed. / Xingquan Zhu; Sanjay Ranka; My T. Thai; Takashi Washio; Xindong Wu. Institute of Electrical and Electronics Engineers Inc., 2022. p. 975-980 (Proceedings - IEEE International Conference on Data Mining, ICDM; Vol. 2022-November).

Research output: Chapter in book/report/conference proceedingConference contributionResearchpeer review

Hussain, H, Cao, M, Sikdar, S, Helic, D, Lex, E, Strohmaier, M & Kern, R 2022, Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. in X Zhu, S Ranka, MT Thai, T Washio & X Wu (eds), Proceedings : 22nd IEEE International Conference on Data Mining, ICDM 2022. Proceedings - IEEE International Conference on Data Mining, ICDM, vol. 2022-November, Institute of Electrical and Electronics Engineers Inc., pp. 975-980, 22nd IEEE International Conference on Data Mining, ICDM 2022, Orlando, United States, 28 Nov 2022. https://doi.org/10.48550/arXiv.2209.05957, https://doi.org/10.1109/ICDM54844.2022.00117
Hussain, H., Cao, M., Sikdar, S., Helic, D., Lex, E., Strohmaier, M., & Kern, R. (2022). Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. In X. Zhu, S. Ranka, M. T. Thai, T. Washio, & X. Wu (Eds.), Proceedings : 22nd IEEE International Conference on Data Mining, ICDM 2022 (pp. 975-980). (Proceedings - IEEE International Conference on Data Mining, ICDM; Vol. 2022-November). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.48550/arXiv.2209.05957, https://doi.org/10.1109/ICDM54844.2022.00117
Hussain H, Cao M, Sikdar S, Helic D, Lex E, Strohmaier M et al. Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. In Zhu X, Ranka S, Thai MT, Washio T, Wu X, editors, Proceedings : 22nd IEEE International Conference on Data Mining, ICDM 2022. Institute of Electrical and Electronics Engineers Inc. 2022. p. 975-980. (Proceedings - IEEE International Conference on Data Mining, ICDM). doi: 10.48550/arXiv.2209.05957, 10.1109/ICDM54844.2022.00117
Hussain, Hussain ; Cao, Meng ; Sikdar, Sandipan et al. / Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks. Proceedings : 22nd IEEE International Conference on Data Mining, ICDM 2022. editor / Xingquan Zhu ; Sanjay Ranka ; My T. Thai ; Takashi Washio ; Xindong Wu. Institute of Electrical and Electronics Engineers Inc., 2022. pp. 975-980 (Proceedings - IEEE International Conference on Data Mining, ICDM).
BibTeX
@inproceedings{7d3ad90c342f4c21a0c94782304abe0e,
title = "Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks",
abstract = "We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.",
keywords = "adversarial attacks, fairness, graph neural networks",
author = "Hussain Hussain and Meng Cao and Sandipan Sikdar and Denis Helic and Elisabeth Lex and Markus Strohmaier and Roman Kern",
year = "2022",
doi = "10.48550/arXiv.2209.05957",
language = "English",
isbn = "978-1-6654-5100-0",
series = "Proceedings - IEEE International Conference on Data Mining, ICDM",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "975--980",
editor = "Xingquan Zhu and Sanjay Ranka and Thai, {My T.} and Takashi Washio and Xindong Wu",
booktitle = "Proceedings",
address = "United States",
note = "22nd IEEE International Conference on Data Mining, ICDM 2022 ; Conference date: 28-11-2022 Through 01-12-2022",
}

RIS

TY  - GEN
T1  - Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks
AU  - Hussain, Hussain
AU  - Cao, Meng
AU  - Sikdar, Sandipan
AU  - Helic, Denis
AU  - Lex, Elisabeth
AU  - Strohmaier, Markus
AU  - Kern, Roman
PY  - 2022
Y1  - 2022
N2  - We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
AB  - We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
KW  - adversarial attacks
KW  - fairness
KW  - graph neural networks
UR  - http://www.scopus.com/inward/record.url?scp=85147735764&partnerID=8YFLogxK
U2  - 10.48550/arXiv.2209.05957
DO  - 10.48550/arXiv.2209.05957
M3  - Conference contribution
AN  - SCOPUS:85147735764
SN  - 978-1-6654-5100-0
T3  - Proceedings - IEEE International Conference on Data Mining, ICDM
SP  - 975
EP  - 980
BT  - Proceedings
A2  - Zhu, Xingquan
A2  - Ranka, Sanjay
A2  - Thai, My T.
A2  - Washio, Takashi
A2  - Wu, Xindong
PB  - Institute of Electrical and Electronics Engineers Inc.
T2  - 22nd IEEE International Conference on Data Mining, ICDM 2022
Y2  - 28 November 2022 through 1 December 2022
ER  -