Advancing Electric Engineering Education through Immersive Virtual Reality: Deep Learning and Evolutionary Algorithms for Image Stitching and Rectification in Virtual Lab Environments
DOI: https://doi.org/10.5753/jbcs.2024.3751
Keywords: Virtual Reality, Image Stitching, Rectification Techniques, Immersive Learning, Virtual Lab
Abstract
Virtual Reality (VR) technology has emerged as a transformative tool in education, offering immersive and interactive experiences that enhance learning outcomes. This paper delves into the application of image stitching and rectification techniques to create a VR lab environment tailored specifically for electrical engineering education. The importance of VR technology in education is explored, highlighting its role in promoting active learning and providing experiential learning opportunities. The primary emphasis of this paper lies in the seamless incorporation of image stitching algorithms for the creation of panoramic perspectives, along with the implementation of rectification techniques to correct irregular borders within the stitched images. By utilizing Convolutional Neural Networks (CNNs) and Genetic Algorithms (GAs), the proposed approach optimizes the rectification process, resulting in visually cohesive representations. Demonstrating the utilization of the VR lab across a range of situations, such as examining power transfer and creating control panels for water pumps in irrigation initiatives, the immersive setting enables students to delve into intricate systems. The performance of the proposed method was evaluated using various metrics, including mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and Fréchet inception distance (FID). The combination of a deep learning algorithm (CNN) and an optimization algorithm (a genetic algorithm, GA) increased the accuracy of the rectified images: the average PSNR reached 23.98, SSIM was 0.8066, and FID was 18.72.
Regarding users’ opinions of the environment generated by stitching and rectifying images, participants expressed consistently positive sentiments, with mean scores ranging from 3.65 to 4.03 (all above the scale midpoint) and moderate variability (standard deviations from 1.070 to 1.251), suggesting general favorability with some variation in responses. This experience empowers users to gain insights and cultivate essential problem-solving abilities at a heightened level. Collaborative learning is facilitated, enabling students to engage in joint projects regardless of their physical location. Through the synthesis of image processing techniques and VR technology, this research contributes to the enrichment of educational experiences and the advancement of electrical engineering education.
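To make the evaluation metrics named above concrete, the following is a minimal sketch of how two of them, MSE and PSNR, are computed between a reference image and a rectified output. This is an illustrative implementation using NumPy only, not the authors' evaluation code; the array names and the toy noise level are assumptions for demonstration. (SSIM and FID involve windowed statistics and a pretrained Inception network, respectively, and are omitted here.)

```python
import numpy as np

def mse(ref: np.ndarray, img: np.ndarray) -> float:
    """Mean squared error between two images of equal shape."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, img: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, img)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / err)

# Toy example: an 8-bit reference image and a slightly perturbed copy,
# standing in for a ground-truth frame and a rectified stitched frame.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(reference.astype(np.int16) + rng.integers(-5, 6, size=(64, 64)),
                0, 255).astype(np.uint8)

print(f"MSE : {mse(reference, noisy):.2f}")
print(f"PSNR: {psnr(reference, noisy):.2f} dB")
```

Higher PSNR and SSIM (closer to 1) indicate a rectified image closer to the reference, while a lower FID indicates a more natural-looking result, which is the direction of improvement reported in the abstract.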
Copyright (c) 2024 Zainab M. Hussain, Muntasser A. Wahsh, Mays A. Wahish
This work is licensed under a Creative Commons Attribution 4.0 International License.