Details
Original language | English |
---|---|
Title of host publication | Proceedings |
Subtitle of host publication | 22nd IEEE International Conference on Data Mining, ICDM 2022 |
Editors | Xingquan Zhu, Sanjay Ranka, My T. Thai, Takashi Washio, Xindong Wu |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 975-980 |
Number of pages | 6 |
ISBN (electronic) | 9781665450997 |
ISBN (Print) | 978-1-6654-5100-0 |
Publication status | Published - 2022 |
Event | 22nd IEEE International Conference on Data Mining, ICDM 2022 - Orlando, United States. Duration: 28 Nov 2022 → 1 Dec 2022 |
Publication series
Name | Proceedings - IEEE International Conference on Data Mining, ICDM |
---|---|
Volume | 2022-November |
ISSN (Print) | 1550-4786 |
Abstract
We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
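To make the attack pattern in the abstract concrete, the following is a minimal, hypothetical sketch of inter-group link injection: it adds edges between node pairs that have opposite sensitive attributes and opposite class labels, up to a fixed perturbation budget. The random pair selection, the helper name `inject_inter_group_links`, and the `networkx` toy graph are illustrative assumptions; the paper's actual attack chooses which links to inject adversarially rather than uniformly at random.

```python
import numpy as np
import networkx as nx

def inject_inter_group_links(graph, labels, sensitive, budget, seed=0):
    """Add `budget` edges between node pairs with opposite sensitive
    attribute AND opposite class label -- the link pattern the abstract
    identifies as degrading fairness. Pairs are picked uniformly at
    random here (illustrative only); the paper's attack optimizes them."""
    rng = np.random.default_rng(seed)
    nodes = list(graph.nodes)
    candidates = [
        (u, v)
        for i, u in enumerate(nodes)
        for v in nodes[i + 1:]
        if sensitive[u] != sensitive[v]   # opposite subgroups
        and labels[u] != labels[v]        # opposite class labels
        and not graph.has_edge(u, v)      # only newly injected links
    ]
    picked = rng.choice(len(candidates),
                        size=min(budget, len(candidates)),
                        replace=False)
    attacked = graph.copy()
    attacked.add_edges_from(candidates[i] for i in picked)
    return attacked

# Toy usage with hypothetical binary labels and sensitive attributes.
G = nx.erdos_renyi_graph(20, 0.1, seed=1)
y = {v: v % 2 for v in G}                 # class label (hypothetical)
s = {v: 1 if v >= 10 else 0 for v in G}   # sensitive attribute (hypothetical)
G_adv = inject_inter_group_links(G, y, s, budget=5)
print(G.number_of_edges(), "->", G_adv.number_of_edges())
```

A low `budget` relative to the number of existing edges corresponds to the low perturbation rate the abstract refers to when calling the attacks efficient.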
ASJC Scopus subject areas
- Engineering (all)
- General Engineering
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Proceedings: 22nd IEEE International Conference on Data Mining, ICDM 2022. ed. / Xingquan Zhu; Sanjay Ranka; My T. Thai; Takashi Washio; Xindong Wu. Institute of Electrical and Electronics Engineers Inc., 2022. pp. 975-980 (Proceedings - IEEE International Conference on Data Mining, ICDM; Vol. 2022-November).
Publication: Chapter in book/report/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks
AU - Hussain, Hussain
AU - Cao, Meng
AU - Sikdar, Sandipan
AU - Helic, Denis
AU - Lex, Elisabeth
AU - Strohmaier, Markus
AU - Kern, Roman
PY - 2022
Y1 - 2022
N2 - We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
AB - We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
KW - adversarial attacks
KW - fairness
KW - graph neural networks
UR - http://www.scopus.com/inward/record.url?scp=85147735764&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2209.05957
DO - 10.48550/arXiv.2209.05957
M3 - Conference contribution
AN - SCOPUS:85147735764
SN - 978-1-6654-5100-0
T3 - Proceedings - IEEE International Conference on Data Mining, ICDM
SP - 975
EP - 980
BT - Proceedings
A2 - Zhu, Xingquan
A2 - Ranka, Sanjay
A2 - Thai, My T.
A2 - Washio, Takashi
A2 - Wu, Xindong
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE International Conference on Data Mining, ICDM 2022
Y2 - 28 November 2022 through 1 December 2022
ER -