Does software security need to be updated when faced with the reality of distributed machine learning?
DOI: https://doi.org/10.5753/compbr.2024.52.4601

Keywords: Federated Learning, Machine Learning Attacks, Software Security

Abstract
Federated Learning (FL) is a promising distributed learning technique for training models across multiple devices without sharing raw data, thereby preserving privacy. However, it faces cybersecurity challenges such as data and model poisoning attacks, which compromise data integrity and privacy. Mitigation strategies, such as data filtering and differential privacy, are essential for protecting ML applications. Integrating knowledge of these threats into software security curricula and giving students hands-on experience is key to preparing them for the job market. Tools like FedML and FADO facilitate experimenting with and testing security mechanisms in the ML domain, contributing to a better understanding of vulnerabilities and security countermeasures.
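The threat model summarized above can be made concrete with a small simulation: several clients train locally, a federated server aggregates their updates, and one malicious client sends a scaled, inverted model update (a model poisoning attack). This is a minimal sketch, not code from the article or from FedML/FADO; the logistic-regression task, the client setup, and the choice of a trimmed-mean defense are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local logistic-regression training (plain gradient descent)."""
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step
    return w

def trimmed_mean(updates, trim=1):
    """Robust aggregation: per coordinate, drop the `trim` smallest and
    largest client values before averaging, so a single extreme
    (poisoned) update cannot dominate the global model."""
    s = np.sort(np.stack(updates), axis=0)
    return s[trim:len(updates) - trim].mean(axis=0)

# Synthetic data: the label depends only on the first feature.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)
shards = np.array_split(np.arange(200), 4)   # 4 clients, 50 samples each

w_global = np.zeros(5)
for _ in range(10):                          # federated rounds
    updates = [local_update(w_global, X[idx], y[idx]) for idx in shards]
    # Client 0 is malicious: it inverts and scales its update.
    updates[0] = -10 * updates[0]
    w_global = trimmed_mean(updates, trim=1)

acc = (((X @ w_global) > 0) == y).mean()
print(f"accuracy under attack with robust aggregation: {acc:.2f}")
```

Because the poisoned update is extreme in every coordinate, the trimmed mean discards it each round; replacing `trimmed_mean` with a plain `np.mean` over the same updates lets the attacker steer the global model. Byzantine-robust aggregators of this kind are exactly what local model poisoning attacks (Fang et al., 2020) are designed to circumvent, which is why hands-on experimentation matters.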
References
Bukaty, P. The California Consumer Privacy Act (CCPA): An Implementation Guide. IT Governance Publishing, 2019.
Fang, M., Cao, X., Jia, J., Gong, N. Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. USENIX Security Symposium, 2020.
European Parliament and Council of the European Union. General Data Protection Regulation (GDPR), 2016.
Han, S., et al. FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and Federated LLMs. arXiv:2306.04959, 2023.
Kairouz, P., et al. Advances and Open Problems in Federated Learning. Foundations and Trends in Machine Learning, 14(1–2):1–210, 2021.
McMahan, H., Moore, E., Ramage, D., Hampson, S., Arcas, B. Communication-Efficient Learning of Deep Networks from Decentralized Data. International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.
Rodrigues, F., Simões, R., Neves, N. FADO: A Federated Learning Attack and Defense Orchestrator. Workshop on Dependable and Secure Machine Learning (DSML), 2023.
Sociedade Brasileira de Computação. Referenciais de Formação para o Curso de Bacharelado em Cibersegurança, 2023.
Tolpegin, V., Truex, S., Gursoy, M., Liu, L. Data Poisoning Attacks Against Federated Learning Systems. European Symposium on Research in Computer Security (ESORICS), 2020.
Yue, K., et al. Gradient Obfuscation Gives a False Sense of Security in Federated Learning. USENIX Security Symposium, 2023.
Zhang, Z., et al. Neurotoxin: Durable Backdoors in Federated Learning. International Conference on Machine Learning (ICML), 2022.
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.