MESFLA: Model Efficiency through Selective Federated Learning Algorithm
DOI: https://doi.org/10.5753/jisa.2024.4044
Keywords: Federated Learning, Cluster, Aggregation Methods
Abstract
Integrating big data and deep learning across various applications has significantly enhanced the intelligence and efficiency of our daily lives. However, it also requires extensive data sharing, raising significant communication and privacy concerns. In this context, Federated Learning (FL) emerges as a promising solution for collaborative model training that preserves the privacy and autonomy of participating clients: models are trained locally on each device, eliminating the need for clients to share their raw data. In each learning round, a client selection mechanism strategically chooses a subset of clients to contribute to model training, and the efficiency of this selection directly impacts model convergence and accuracy as well as the overall communication load on the network. In addition, FL faces challenges when dealing with non-Independent and Identically Distributed (non-IID) data, where diversity in data distributions often reduces classification accuracy. Hence, designing an efficient client selection mechanism for non-IID scenarios is essential, but it remains an open issue. This article proposes MESFLA, a Model Efficiency through Selective Federated Learning Algorithm. MESFLA employs Centered Kernel Alignment (CKA) to find similar models, grouping participants with comparable data distributions or learning objectives, and then selects the most relevant clients in each group based on data weight and entropy. Our comprehensive evaluation on multiple datasets, including MNIST, CIFAR-10, and CIFAR-100, demonstrates MESFLA's superior performance over traditional FL algorithms: with Data Weight selection, the aggregated global model reaches higher accuracy and lower loss, with a difference of about 3 rounds compared with the other selection methods.
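The abstract names two building blocks: linear CKA to group clients whose models behave similarly, and a data-weight/entropy score to pick the most relevant client in each group. The Python/NumPy sketch below is a minimal illustration of those two ideas only, not MESFLA's actual procedure. It assumes each client i is summarized by an activation matrix reps[i] (a hypothetical preprocessing step, e.g., running a shared probe batch through the local model) and a label histogram counts[i]; the greedy single-pass grouping and the sim_threshold cutoff are illustrative assumptions.

```python
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA (Kornblith et al., 2019): similarity in [0, 1] between
    two representation matrices whose rows are examples."""
    x = x - x.mean(axis=0, keepdims=True)  # center features column-wise
    y = y - y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    return (np.linalg.norm(y.T @ x, "fro") ** 2
            / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")))

def label_entropy(counts: np.ndarray) -> float:
    """Shannon entropy of a client's label histogram; higher values
    indicate a more balanced (closer-to-IID) local dataset."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def group_and_select(reps, counts, sim_threshold=0.9):
    """Greedy CKA grouping, then one representative client per group.

    reps:   list of (n_probe, d) activation matrices, one per client.
    counts: list of per-client label histograms.
    Returns the indices of the selected clients.
    """
    groups = []  # each group is a list of client indices
    for i in range(len(reps)):
        for g in groups:
            # join the first group whose anchor model is similar enough
            if linear_cka(reps[i], reps[g[0]]) >= sim_threshold:
                g.append(i)
                break
        else:
            groups.append([i])  # no similar group found: start a new one
    total = sum(c.sum() for c in counts)
    selected = []
    for g in groups:
        # relevance = data weight (share of all samples) x label entropy
        scores = [(counts[i].sum() / total) * label_entropy(counts[i]) for i in g]
        selected.append(g[int(np.argmax(scores))])
    return selected

# Toy usage with random stand-in data:
rng = np.random.default_rng(0)
reps = [rng.normal(size=(32, 16)) for _ in range(5)]
counts = [rng.integers(1, 50, size=10) for _ in range(5)]
print(group_and_select(reps, counts, sim_threshold=0.5))
```

In a full FL loop, the returned indices would determine which clients train and upload updates in the next round. The linear form of CKA is used here for brevity; Kornblith et al. (2019) also describe kernel variants.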
License
Copyright (c) 2024 Journal of Internet Services and Applications
This work is licensed under a Creative Commons Attribution 4.0 International License.