Aiding Novice Chess Moves with YOLO Pose Detection and Augmented Reality
DOI: https://doi.org/10.5753/reic.2025.6046
Keywords: Deep learning, YOLO, computer vision, augmented reality, keypoints detection, chess
Abstract
This paper presents the development of a system that combines deep learning, computer vision, and augmented reality to help beginner-level players learn chess. The proposed approach uses a deep learning YOLO model to detect the chessboard and pieces. A series of computer vision algorithms is applied to segment the chessboard into a grid and to position the detected pieces within the squares. The system provides interactive assistance by overlaying helpful information on the augmented reality view, highlighting the possible moves for each piece. The custom-trained YOLO model achieved an overall precision of 95% in chessboard and piece detection. The chessboard boundary keypoint detection reached 99% accuracy. Our chessboard segmentation algorithm correctly identified 98% of the chessboards in the validation dataset. An error rate of 1.45% per chessboard square was achieved when positioning the pieces within the grid. The whole processing pipeline takes an average of 454 ms per image. Future work may explore end-to-end deep learning approaches to improve board and piece localization. Additionally, user studies are proposed to evaluate the system's effectiveness in helping beginner chess players improve their skills.
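Although the paper's own pipeline relies on a custom-trained YOLO pose model followed by a series of computer vision algorithms for grid segmentation, the sketch below only illustrates the general idea with a simpler homography-based mapping. The weight files, class names, and corner-keypoint ordering are assumptions introduced for illustration, not the authors' artifacts; detections come from hypothetical `ultralytics` YOLO models, and `python-chess` supplies the legal moves an AR overlay could highlight.

```python
# Minimal sketch (not the authors' implementation): map YOLO detections to board
# squares via a homography and list legal moves with python-chess.
import cv2
import numpy as np
import chess
from ultralytics import YOLO

BOARD_SIZE = 8
CELL = 100  # side of one square in the rectified (top-down) board, in pixels

def detections_to_squares(pose_weights, piece_weights, image_path):
    # 1) Detect the four outer board corners with a pose (keypoint) model.
    #    Weight files and keypoint order (a8, h8, h1, a1) are assumptions.
    board_result = YOLO(pose_weights)(image_path)[0]
    corners = board_result.keypoints.xy[0].cpu().numpy()[:4]

    # 2) Homography from the image corners to a canonical top-down 8x8 grid.
    target = np.float32([[0, 0], [8 * CELL, 0], [8 * CELL, 8 * CELL], [0, 8 * CELL]])
    H, _ = cv2.findHomography(corners.astype(np.float32), target)

    # 3) Detect pieces and project each box's bottom-center onto the grid.
    piece_result = YOLO(piece_weights)(image_path)[0]
    occupancy = {}
    for box, cls in zip(piece_result.boxes.xyxy.cpu().numpy(),
                        piece_result.boxes.cls.cpu().numpy()):
        x1, y1, x2, y2 = box
        foot = np.float32([[[(x1 + x2) / 2, y2]]])        # bottom-center of the box
        gx, gy = cv2.perspectiveTransform(foot, H)[0, 0]
        col, row = int(gx // CELL), int(gy // CELL)        # 0..7 grid indices
        if 0 <= col < BOARD_SIZE and 0 <= row < BOARD_SIZE:
            square = chess.square(col, BOARD_SIZE - 1 - row)  # rank 8 is grid row 0
            occupancy[square] = piece_result.names[int(cls)]
    return occupancy

def legal_moves_from(fen, square_name):
    # Moves that could be highlighted in the AR overlay for the chosen piece.
    board = chess.Board(fen)
    src = chess.parse_square(square_name)
    return [m.uci() for m in board.legal_moves if m.from_square == src]

if __name__ == "__main__":
    print(legal_moves_from(chess.STARTING_FEN, "g1"))  # e.g. ['g1h3', 'g1f3']
```

Projecting the bottom-center of each bounding box, rather than its center, is a common heuristic to assign tall pieces to the square they actually stand on when the board is viewed at an angle.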
License
Copyright (c) 2025 The authors

This work is licensed under a Creative Commons Attribution 4.0 International License.
