Membership Inference Attack on Graph Neural Networks

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

Organizational units


Details

Original language: English
Title of proceedings: Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 11-20
Number of pages: 10
ISBN (electronic): 9781665416238
Publication status: Published - 2021
Event: 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021 - Virtual, Online, United States
Duration: 13 Dec 2021 to 15 Dec 2021

Publication series

Name: Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021

Abstract

Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks. We focus on how trained GNN models can leak information about the member nodes they were trained on. We introduce two realistic settings for performing a membership inference (MI) attack on GNNs. While choosing the simplest possible attack model that utilizes the posteriors of the trained model (black-box access), we thoroughly analyze the properties of GNNs and the datasets that dictate the differences in their robustness to MI attacks. While in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor. We support our findings by extensive experiments on four representative GNN models. To prevent MI attacks on GNNs, we propose two effective defenses that significantly decrease the attacker's inference accuracy, by up to 60%, without degrading the target model's performance. Our code is available at https://github.com/iyempissy/rebMIGraph.
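The black-box setting described in the abstract gives the attacker only the target model's posterior (output probability) vectors. A minimal sketch of such an attack is a confidence-threshold test: member nodes tend to receive low-entropy (confident) posteriors. This is an illustrative sketch, not the authors' implementation; the function names, the toy posteriors, and the threshold value are all invented for illustration:

```python
import numpy as np

def posterior_entropy(posteriors):
    # Shannon entropy of each posterior vector; models tend to be
    # more confident (lower entropy) on nodes they were trained on.
    p = np.clip(posteriors, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def threshold_mi_attack(posteriors, threshold):
    # Predict "member" (1) when posterior entropy falls below the
    # threshold, i.e. the model is suspiciously confident.
    return (posterior_entropy(posteriors) < threshold).astype(int)

# Toy example: one confident (member-like) posterior and one
# near-uniform (non-member-like) posterior over three classes.
posts = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25]])
pred = threshold_mi_attack(posts, threshold=0.8)  # -> array([1, 0])
```

In practice the threshold (or a small attack classifier taking the full posterior as input) would be calibrated on a shadow model; the sketch only shows why posteriors alone can leak membership.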

ASJC Scopus subject areas

Cite

Membership Inference Attack on Graph Neural Networks. / Olatunji, Iyiola E.; Nejdl, Wolfgang; Khosla, Megha.
Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021. Institute of Electrical and Electronics Engineers Inc., 2021. pp. 11-20 (Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021).


Olatunji, IE, Nejdl, W & Khosla, M 2021, Membership Inference Attack on Graph Neural Networks. in Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021. Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021, Institute of Electrical and Electronics Engineers Inc., pp. 11-20, 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021, Virtual, Online, United States, 13 Dec. 2021. https://doi.org/10.48550/arXiv.2101.06570, https://doi.org/10.1109/TPSISA52974.2021.00002
Olatunji, I. E., Nejdl, W., & Khosla, M. (2021). Membership Inference Attack on Graph Neural Networks. In Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021 (pp. 11-20). (Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.48550/arXiv.2101.06570, https://doi.org/10.1109/TPSISA52974.2021.00002
Olatunji IE, Nejdl W, Khosla M. Membership Inference Attack on Graph Neural Networks. In: Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021. Institute of Electrical and Electronics Engineers Inc. 2021. pp. 11-20. (Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021). doi: 10.48550/arXiv.2101.06570, 10.1109/TPSISA52974.2021.00002
Olatunji, Iyiola E. ; Nejdl, Wolfgang ; Khosla, Megha. / Membership Inference Attack on Graph Neural Networks. Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021. Institute of Electrical and Electronics Engineers Inc., 2021. pp. 11-20 (Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021).
@inproceedings{ff5eba4fd0494c90b3a61f8dc02a60c8,
title = "Membership Inference Attack on Graph Neural Networks",
abstract = "Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks. We focus on how trained GNN models can leak information about the member nodes they were trained on. We introduce two realistic settings for performing a membership inference (MI) attack on GNNs. While choosing the simplest possible attack model that utilizes the posteriors of the trained model (black-box access), we thoroughly analyze the properties of GNNs and the datasets that dictate the differences in their robustness to MI attacks. While in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor. We support our findings by extensive experiments on four representative GNN models. To prevent MI attacks on GNNs, we propose two effective defenses that significantly decrease the attacker's inference accuracy, by up to 60%, without degrading the target model's performance. Our code is available at https://github.com/iyempissy/rebMIGraph.",
keywords = "Graph Neural Networks, Membership Inference, Privacy leakage",
author = "Olatunji, {Iyiola E.} and Wolfgang Nejdl and Megha Khosla",
note = "Funding Information: Acknowledgements. This work is in part funded by the Lower Saxony Ministry of Science and Culture under grant number ZN3491 within the Lower Saxony {"}Vorab{"} of the Volkswagen Foundation and supported by the Center for Digital Innovations (ZDIN), and the Federal Ministry of Education and Research (BMBF) under LeibnizKILabor (grant number 01DD20003). ; 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021 ; Conference date: 13-12-2021 Through 15-12-2021",
year = "2021",
doi = "10.48550/arXiv.2101.06570",
language = "English",
series = "Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "11--20",
booktitle = "Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021",
address = "United States",

}


TY - GEN

T1 - Membership Inference Attack on Graph Neural Networks

AU - Olatunji, Iyiola E.

AU - Nejdl, Wolfgang

AU - Khosla, Megha

N1 - Funding Information: Acknowledgements. This work is in part funded by the Lower Saxony Ministry of Science and Culture under grant number ZN3491 within the Lower Saxony "Vorab" of the Volkswagen Foundation and supported by the Center for Digital Innovations (ZDIN), and the Federal Ministry of Education and Research (BMBF) under LeibnizKILabor (grant number 01DD20003).

PY - 2021

Y1 - 2021

N2 - Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks. We focus on how trained GNN models can leak information about the member nodes they were trained on. We introduce two realistic settings for performing a membership inference (MI) attack on GNNs. While choosing the simplest possible attack model that utilizes the posteriors of the trained model (black-box access), we thoroughly analyze the properties of GNNs and the datasets that dictate the differences in their robustness to MI attacks. While in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor. We support our findings by extensive experiments on four representative GNN models. To prevent MI attacks on GNNs, we propose two effective defenses that significantly decrease the attacker's inference accuracy, by up to 60%, without degrading the target model's performance. Our code is available at https://github.com/iyempissy/rebMIGraph.

AB - Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks. We focus on how trained GNN models can leak information about the member nodes they were trained on. We introduce two realistic settings for performing a membership inference (MI) attack on GNNs. While choosing the simplest possible attack model that utilizes the posteriors of the trained model (black-box access), we thoroughly analyze the properties of GNNs and the datasets that dictate the differences in their robustness to MI attacks. While in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor. We support our findings by extensive experiments on four representative GNN models. To prevent MI attacks on GNNs, we propose two effective defenses that significantly decrease the attacker's inference accuracy, by up to 60%, without degrading the target model's performance. Our code is available at https://github.com/iyempissy/rebMIGraph.

KW - Graph Neural Networks

KW - Membership Inference

KW - Privacy leakage

UR - http://www.scopus.com/inward/record.url?scp=85128779483&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2101.06570

DO - 10.48550/arXiv.2101.06570

M3 - Conference contribution

AN - SCOPUS:85128779483

T3 - Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021

SP - 11

EP - 20

BT - Proceedings - 2021 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 3rd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2021

Y2 - 13 December 2021 through 15 December 2021

ER -
