Details
Original language | English
---|---
Title of host publication | Proceedings
Subtitle of host publication | 2022 IEEE 42nd International Conference on Distributed Computing Systems, ICDCS 2022
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 820-830
Number of pages | 11
ISBN (electronic) | 978-1-6654-7177-0
ISBN (print) | 978-1-6654-7178-7
Publication status | Published - 2022
Event | 42nd IEEE International Conference on Distributed Computing Systems, ICDCS 2022, Bologna, Italy, 10 Jul 2022 → 13 Jul 2022
Publication series
Name | Proceedings - International Conference on Distributed Computing Systems
---|---
Volume | 2022-July
Abstract
Malicious clients can attack federated learning systems by injecting compromised data, including backdoor samples, during the training phase. The compromised global model performs well on the validation dataset designed for the task, but a small subset of data carrying backdoor patterns can trigger the model to make wrong predictions. In this work, we propose a new and effective method to mitigate backdoor attacks in federated learning after the training phase. Through a federated pruning method, we remove redundant neurons and "backdoor neurons", which trigger misbehavior upon recognizing backdoor patterns while keeping silent when the input data is clean. A second, optional fine-tuning process is designed to recover the test accuracy on benign datasets that pruning damages. In the last step, we eliminate backdoor attacks by limiting the extreme values of inputs and of the network's weights. Experiments with our defense mechanism against the state-of-the-art Distributed Backdoor Attacks on CIFAR-10 show promising results: the average attack success rate drops by more than 70% with less than 2% loss of test accuracy on the validation dataset. Our defense method also outperforms the state-of-the-art pruning defense against backdoor attacks in the federated learning scenario.
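As a rough illustration of the pipeline the abstract describes, below is a minimal single-model PyTorch sketch of activation-based pruning followed by weight clipping. It is a sketch under assumptions, not the authors' implementation: the function names (`prune_dormant_neurons`, `clip_extremes`), the prune ratio, and the clipping bound are illustrative, and it omits the federated part of the method, where per-client neuron rankings would be aggregated by the server before pruning.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def prune_dormant_neurons(model: nn.Module, layer: nn.Conv2d,
                          clean_loader: DataLoader,
                          prune_ratio: float = 0.2) -> torch.Tensor:
    """Rank the channels of `layer` by mean activation on clean data and
    zero out the least-active ones (candidate "backdoor neurons" that stay
    silent on clean inputs). All names/thresholds here are illustrative."""
    act_sum, n_batches = None, 0

    def hook(_module, _inputs, output):
        nonlocal act_sum, n_batches
        # Per-channel mean absolute activation over batch and spatial dims.
        batch_mean = output.detach().abs().mean(dim=(0, 2, 3))
        act_sum = batch_mean if act_sum is None else act_sum + batch_mean
        n_batches += 1

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for x, _ in clean_loader:   # small clean validation set
            model(x)
    handle.remove()

    mean_act = act_sum / n_batches
    n_prune = int(prune_ratio * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:n_prune]

    # "Pruning" here simply zeroes the selected filters and biases in place.
    with torch.no_grad():
        layer.weight[prune_idx] = 0.0
        if layer.bias is not None:
            layer.bias[prune_idx] = 0.0
    return prune_idx

def clip_extremes(model: nn.Module, weight_bound: float = 3.0) -> None:
    """Last step from the abstract: limit extreme parameter values."""
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-weight_bound, weight_bound)
```

Input clipping, the other half of the last step, could be applied analogously at inference time (e.g., `x.clamp_(0.0, 1.0)` for normalized images); the optional fine-tuning pass on benign data would then run standard training for a few epochs to recover accuracy lost to pruning.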
Keywords
- backdoor attack
- federated learning
- federated model pruning
- machine-learning security
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Hardware and Architecture
- Computer Networks and Communications
Cite this
Wu, C., Yang, X., Zhu, S., & Mitra, P. (2022). Toward Cleansing Backdoored Neural Networks in Federated Learning. In Proceedings: 2022 IEEE 42nd International Conference on Distributed Computing Systems, ICDCS 2022 (pp. 820-830). (Proceedings - International Conference on Distributed Computing Systems; Vol. 2022-July). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICDCS54860.2022.00084
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Toward Cleansing Backdoored Neural Networks in Federated Learning
AU - Wu, Chen
AU - Yang, Xian
AU - Zhu, Sencun
AU - Mitra, Prasenjit
PY - 2022
Y1 - 2022
N2 - Malicious clients can attack federated learning systems by injecting compromised data, including backdoor samples, during the training phase. The compromised global model performs well on the validation dataset designed for the task, but a small subset of data carrying backdoor patterns can trigger the model to make wrong predictions. In this work, we propose a new and effective method to mitigate backdoor attacks in federated learning after the training phase. Through a federated pruning method, we remove redundant neurons and "backdoor neurons", which trigger misbehavior upon recognizing backdoor patterns while keeping silent when the input data is clean. A second, optional fine-tuning process is designed to recover the test accuracy on benign datasets that pruning damages. In the last step, we eliminate backdoor attacks by limiting the extreme values of inputs and of the network's weights. Experiments with our defense mechanism against the state-of-the-art Distributed Backdoor Attacks on CIFAR-10 show promising results: the average attack success rate drops by more than 70% with less than 2% loss of test accuracy on the validation dataset. Our defense method also outperforms the state-of-the-art pruning defense against backdoor attacks in the federated learning scenario.
AB - Malicious clients can attack federated learning systems by injecting compromised data, including backdoor samples, during the training phase. The compromised global model performs well on the validation dataset designed for the task, but a small subset of data carrying backdoor patterns can trigger the model to make wrong predictions. In this work, we propose a new and effective method to mitigate backdoor attacks in federated learning after the training phase. Through a federated pruning method, we remove redundant neurons and "backdoor neurons", which trigger misbehavior upon recognizing backdoor patterns while keeping silent when the input data is clean. A second, optional fine-tuning process is designed to recover the test accuracy on benign datasets that pruning damages. In the last step, we eliminate backdoor attacks by limiting the extreme values of inputs and of the network's weights. Experiments with our defense mechanism against the state-of-the-art Distributed Backdoor Attacks on CIFAR-10 show promising results: the average attack success rate drops by more than 70% with less than 2% loss of test accuracy on the validation dataset. Our defense method also outperforms the state-of-the-art pruning defense against backdoor attacks in the federated learning scenario.
KW - backdoor attack
KW - federated learning
KW - federated model pruning
KW - machine-learning security
UR - http://www.scopus.com/inward/record.url?scp=85140918570&partnerID=8YFLogxK
U2 - 10.1109/ICDCS54860.2022.00084
DO - 10.1109/ICDCS54860.2022.00084
M3 - Conference contribution
AN - SCOPUS:85140918570
SN - 978-1-6654-7178-7
T3 - Proceedings - International Conference on Distributed Computing Systems
SP - 820
EP - 830
BT - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 42nd IEEE International Conference on Distributed Computing Systems, ICDCS 2022
Y2 - 10 July 2022 through 13 July 2022
ER -