A Triad of Defenses to Mitigate Poisoning Attacks in Federated Learning

Authors

B. O. Mazetto and B. B. Zarpelão

DOI:

https://doi.org/10.5753/jbcs.2026.5558

Keywords:

Federated Learning, Poisoning Attacks, Machine Learning

Abstract

Federated learning (FL) enables the training of machine learning models on decentralized data, potentially improving data privacy. However, FL's distributed architecture is vulnerable to poisoning attacks. In this paper, we propose an FL method that mitigates these attacks through a triad of defense strategies: organizing clients into groups, evaluating the local performance of global models during training, and applying a voting scheme during the inference phase. The proposed approach first divides the clients into randomly sampled groups, each of which generates a distinct global model. Each client trains a local model on its private data and submits it to the central server. The central server aggregates the local models within each group to generate the group's global model. Then, each client receives all global models, selects the best-performing one as its new local model, and the process repeats until training is complete. During the inference phase, each client classifies its inputs through a majority-based voting scheme among the global models. Our experiments on the HAR and MNIST datasets demonstrate that our method effectively mitigates poisoning attacks without compromising the global model's performance.
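The training and inference loop described above can be sketched as a toy simulation. This is an illustrative assumption, not the paper's implementation: models are weight vectors of a linear classifier, local training is a simple perceptron-style update, aggregation is a FedAvg-style mean, and one client mounts a sign-flip poisoning attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: a linear classifier y = sign(w . x).
TRUE_W = np.array([1.0, -2.0, 0.5])

def local_train(w, X, y, lr=0.1, epochs=20):
    """One client's local training (perceptron-style updates)."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += lr * yi * xi
    return w

def accuracy(w, X, y):
    return float(np.mean(np.sign(X @ w) == y))

def make_client_data(n=40):
    X = rng.normal(size=(n, 3))
    return X, np.sign(X @ TRUE_W)

n_clients, n_groups = 9, 3
clients = [make_client_data() for _ in range(n_clients)]
poisoned = {0}  # client 0 submits a sign-flipped model each round

local = [rng.normal(size=3) for _ in range(n_clients)]
for rnd in range(5):
    # 1) Randomly partition the clients into groups.
    groups = rng.permutation(n_clients).reshape(n_groups, -1)
    # 2) Each client trains locally and submits its model (attacker flips it).
    updates = []
    for i, (X, y) in enumerate(clients):
        w = local_train(local[i], X, y)
        updates.append(-w if i in poisoned else w)
    # 3) The server aggregates within each group -> one global model per group.
    globals_ = [np.mean([updates[i] for i in grp], axis=0) for grp in groups]
    # 4) Each client adopts the global model that performs best on its own data.
    local = [max(globals_, key=lambda g: accuracy(g, X, y)) for X, y in clients]

def vote(models, x):
    """Inference: majority vote among the group models' predictions."""
    return 1.0 if sum(np.sign(g @ x) for g in models) > 0 else -1.0

X_test, y_test = make_client_data(200)
acc = float(np.mean([vote(globals_, x) == yi for x, yi in zip(X_test, y_test)]))
```

Because the poisoned update is averaged with clean ones inside a single group, and clients repeatedly switch to the best-performing global model, the flipped contribution is diluted over rounds; the majority vote at inference then masks any group model that remains degraded.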


References

Andreina, S., Marson, G. A., Möllering, H., and Karame, G. (2020). BAFFLE: Backdoor detection via feedback-based federated learning. CoRR, abs/2011.02167. Available at: [link].

Blanchard, P., El Mhamdi, E. M., Guerraoui, R., and Stainer, J. (2017). Machine learning with adversaries: Byzantine tolerant gradient descent. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Available at: [link].

Bouacida, N. and Mohapatra, P. (2021). Vulnerabilities in federated learning. IEEE Access, 9:63229-63249. DOI: 10.1109/ACCESS.2021.3075203.

Cao, X., Jia, J., and Gong, N. Z. (2021). Provably secure federated learning against malicious clients. CoRR, abs/2102.01854. DOI: 10.1609/aaai.v35i8.16849.

Cao, X., Zhang, Z., Jia, J., and Gong, N. Z. (2022). Flcert: Provably secure federated learning against poisoning attacks. IEEE Transactions on Information Forensics and Security, 17:3691-3705. DOI: 10.1109/TIFS.2022.3212174.

Che, C., Li, X., Chen, C., He, X., and Zheng, Z. (2022). A decentralized federated learning framework via committee mechanism with convergence guarantee. IEEE Transactions on Parallel and Distributed Systems, 33(12):4783–4800. DOI: 10.1109/tpds.2022.3202887.

Deng, L. (2012). The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141-142. DOI: 10.1109/MSP.2012.2211477.

Fang, M., Cao, X., Jia, J., and Gong, N. (2020). Local model poisoning attacks to Byzantine-robust federated learning. In 29th USENIX Security Symposium (USENIX Security 20), pages 1605-1622. USENIX Association. Available at: [link].

Jebreel, N. M., Domingo-Ferrer, J., Blanco-Justicia, A., and Sánchez, D. (2024). Enhanced security and privacy via fragmented federated learning. IEEE Transactions on Neural Networks and Learning Systems, 35(5):6703–6717. DOI: 10.1109/tnnls.2022.3212627.

Li, S., Ngai, E., and Voigt, T. (2023). Byzantine-robust aggregation in federated learning empowered industrial IoT. IEEE Transactions on Industrial Informatics, 19(2):1165-1175. DOI: 10.1109/TII.2021.3128164.

Liu, B., Ding, M., Shaham, S., Rahayu, W., Farokhi, F., and Lin, Z. (2021). When machine learning meets privacy: A survey and outlook. ACM Comput. Surv., 54(2). DOI: 10.1145/3436755.

Marcozzi, M. and Mostarda, L. (2024). Analytical model for performability evaluation of practical byzantine fault-tolerant systems. Expert Systems with Applications, 238:121838. DOI: 10.1016/j.eswa.2023.121838.

McMahan, B. and Ramage, D. (2017). Federated learning: Collaborative machine learning without centralized training data. Available at: [link]. Accessed on June 06, 2024.

Reyes-Ortiz, J., Anguita, D., Ghio, A., Oneto, L., and Parra, X. (2012). Human Activity Recognition Using Smartphones. DOI: 10.24432/C54S4K.

Takahashi, K., Yamamoto, K., Kuchiba, A., and Koyama, T. (2022). Confidence interval for micro-averaged F1 and macro-averaged F1 scores. Applied Intelligence, 52(5):4961-4972. DOI: 10.1007/s10489-021-02635-5.

Tolpegin, V., Truex, S., Gursoy, M. E., and Liu, L. (2020). Data poisoning attacks against federated learning systems. In Chen, L., Li, N., Liang, K., and Schneider, S., editors, Computer Security - ESORICS 2020, pages 480-501, Cham. Springer International Publishing. DOI: 10.1007/978-3-030-58951-6_24.

Wang, Z., Kang, Q., Zhang, X., and Hu, Q. (2022). Defense strategies toward model poisoning attacks in federated learning: A survey. DOI: 10.1109/wcnc51071.2022.9771619.

Witt, L., Heyer, M., Toyoda, K., Samek, W., and Li, D. (2023). Decentral and incentivized federated learning frameworks: A systematic literature review. IEEE Internet of Things Journal, 10(4):3642-3663. DOI: 10.1109/JIOT.2022.3231363.

Xia, G., Chen, J., Yu, C., and Ma, J. (2023). Poisoning attacks in federated learning: A survey. IEEE Access, 11:10708-10722. DOI: 10.1109/ACCESS.2023.3238823.

Xu, C., Jia, Y., Zhu, L., Zhang, C., Jin, G., and Sharif, K. (2022). TDFL: Truth discovery based Byzantine robust federated learning. IEEE Transactions on Parallel and Distributed Systems, 33(12):4835-4848. DOI: 10.1109/TPDS.2022.3205714.

Yang, Q., Liu, Y., Chen, T., and Tong, Y. (2019). Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol., 10(2). DOI: 10.1145/3298981.

Yin, D., Chen, Y., Kannan, R., and Bartlett, P. (2018). Byzantine-robust distributed learning: Towards optimal statistical rates. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5650-5659. PMLR. DOI: 10.48550/arxiv.1803.01498.

Zhang, C., Xie, Y., Bai, H., Yu, B., Li, W., and Gao, Y. (2021). A survey on federated learning. Knowledge-Based Systems, 216:106775. DOI: 10.1016/j.knosys.2021.106775.

Zhang, Z., Li, J., Yu, S., and Makaya, C. (2023). Safelearning: Secure aggregation in federated learning with backdoor detectability. IEEE Transactions on Information Forensics and Security, 18:3289–3304. DOI: 10.1109/tifs.2023.3280032.

Published

2026-03-16

How to Cite

Mazetto, B. O., & Zarpelão, B. B. (2026). A Triad of Defenses to Mitigate Poisoning Attacks in Federated Learning. Journal of the Brazilian Computer Society, 32(1), 316–331. https://doi.org/10.5753/jbcs.2026.5558

Section

Regular Issue