Training Neural Networks in Cloud Environments: A Methodology and a Comparative Analysis
DOI: https://doi.org/10.5753/jisa.2025.4891

Keywords: Cloud Computing, Neural Networks, Performance Evaluation, Cost Evaluation

Abstract
Deep neural networks are widely used to solve pattern-recognition problems, and many works seek to optimize the performance of these networks. This optimization requires suitable hardware, which can be expensive for small and medium-sized organizations. This work proposes a methodology for evaluating the performance and cost of training deep neural networks: it assesses how much factors such as environment setup, framework, and dataset impact training time and, alongside this task, evaluates the total financial cost of the environment used for training. Experiments were performed to measure and compare the performance and cost of training deep neural networks on cloud platforms such as Azure, AWS, and Google Cloud. The results show that factors such as input image size and network architecture significantly impact both training time and total cost.
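As a minimal sketch of the kind of cost evaluation the abstract describes, total training cost can be modeled as the instance's hourly price multiplied by the wall-clock training time. The rates and timing below are hypothetical placeholders for illustration, not figures from the paper:

```python
# Sketch: estimating the total financial cost of a cloud training run.
# Hourly rates and the training time below are illustrative placeholders,
# not measurements or prices reported in the paper.

def training_cost(hourly_rate_usd: float, training_hours: float) -> float:
    """Total cost = on-demand hourly price x wall-clock training time."""
    return hourly_rate_usd * training_hours

# Hypothetical GPU-instance on-demand rates (USD/hour) on three providers.
rates = {"aws": 3.06, "azure": 3.40, "gcp": 2.48}

# Hypothetical measured wall-clock training time for one configuration
# (e.g. one network architecture / input-image size combination).
hours = 5.2

costs = {cloud: round(training_cost(rate, hours), 2)
         for cloud, rate in rates.items()}
print(costs)
```

Under this simple model, the same factors the paper highlights (input image size, network architecture) drive cost through their effect on `hours`, while the choice of provider and instance type sets the hourly rate.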
License
Copyright (c) 2025 Journal of Internet Services and Applications

This work is licensed under a Creative Commons Attribution 4.0 International License.

