Cepstral and Deep Features for Apis mellifera Hive Strength Classification
DOI: https://doi.org/10.5753/jisa.2024.4015
Keywords: Precision beekeeping, Machine learning, Feature extraction, Audio processing
Abstract
Regular management practices are crucial for assessing colony conditions and implementing measures to improve colony strength. However, frequent hive inspections can induce stress and even contribute to swarm loss, so effective management must also consider the well-being of the bees. To assist beekeepers in managing their hives, this study proposes a non-invasive approach that integrates Apis mellifera L., 1758 (Hymenoptera: Apidae) colony sound processing with machine learning and deep learning techniques to identify colony strength, a factor essential to the productivity of apiculture. We developed an audio acquisition process focused on colony strength, resulting in a dataset of 3702 samples. We explored features extracted by CNNs, including VGG16, ResNet50, MobileNet, and YOLO, and compared them with cepstral features such as Mel-frequency cepstral coefficients (MFCCs). Cepstral features significantly outperformed those extracted by the CNNs, with MFCCs achieving an accuracy of 95.53% compared to 78.99% for the best-performing CNN. These results highlight the effectiveness of MFCCs in accurately identifying hive strength. This work differs from the literature in that it presents a protocol for categorizing beehives as either weak or strong with a focus on reducing intervention time, provides a public dataset containing MFCCs and deep features extracted from audio recorded at different apiaries, and offers a method for automatically classifying hives by strength. These contributions are intended to serve as a knowledge base for the scientific community and to support beekeepers in non-invasive, cost-effective apiary management.
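As a rough illustration of the kind of pipeline the abstract describes, the sketch below extracts MFCC summary statistics from hive audio clips with librosa and trains a Random Forest classifier with scikit-learn to label hives as weak or strong. This is a minimal sketch under stated assumptions, not the authors' implementation: the data layout, the 13-coefficient/mean-and-std summarization, the 300-tree forest, and the 5-fold cross-validation are illustrative choices.

```python
# Minimal sketch (assumed pipeline, not the paper's code): MFCC features + Random Forest
# for weak/strong hive classification from audio clips.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def mfcc_features(path, sr=22050, n_mfcc=13):
    """Load one audio clip and summarize its MFCCs as per-coefficient mean and std."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train(clips):
    """clips: list of (wav_path, label) pairs with label in {"weak", "strong"} -- hypothetical layout."""
    X = np.stack([mfcc_features(path) for path, _ in clips])
    y = np.array([label for _, label in clips])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"mean CV accuracy: {scores.mean():.4f}")
    return clf.fit(X, y)
```

A comparable deep-feature variant would replace `mfcc_features` with embeddings taken from a pretrained CNN applied to spectrogram images, which is the comparison the study reports MFCCs winning.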
License
Copyright (c) 2024 Journal of Internet Services and Applications
This work is licensed under a Creative Commons Attribution 4.0 International License.