Foveated Path Culling: A mixed path tracing and radiance field approach for optimizing rendering in XR Displays

Authors

H. Henriques, A. de Oliveira, E. Oliveira, D. Trevisan, E. Clua
DOI:

https://doi.org/10.5753/jis.2024.4352

Keywords:

Virtual Reality, Foveated Rendering, Radiance Fields, Visual Perception, 3D Gaussian Splatting

Abstract

Real-time path tracing is essential for producing highly accurate illumination effects in interactive environments. However, due to its computational complexity, optimization techniques such as Foveated Rendering become essential when targeting Head-Mounted Displays (HMDs). In this paper we combine traditional Foveated Rendering approaches with recent advances in the field of radiance fields, extending a previous work with techniques based on Gaussian Splatting. We propose mixing real-time path tracing in the fovea region of an HMD while replacing the image in the periphery with pre-computed radiance fields, either inferred by neural networks or rendered in real time as Gaussian splats. We name our approach Foveated Path Culling (FPC) because it culls raycasts, diminishing the workload by replacing most of the screen's ray-tracing tasks with a less costly approach. FPC achieved higher frame rates than pure path tracing when rendering scenes in real time, with a speedup that grows with display resolution. Our work contributes to the development of rendering techniques for XR experiences that demand low latency, high resolution, and high visual quality through global illumination effects.
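The core idea described above can be sketched as a per-pixel compositing step: pixels near the gaze point keep their path-traced result, while peripheral pixels are filled from the cheaper radiance-field layer. The sketch below is illustrative only, not the authors' implementation; the function names, the circular fovea model, and the linear cross-fade at the boundary are assumptions.

```python
import numpy as np

def foveal_mask(width, height, gaze_px, fovea_radius_px, blend_px=32):
    """Per-pixel weight in [0, 1]: 1 inside the fovea (path-traced region),
    0 in the periphery (radiance-field region). A hypothetical helper."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
    # Smooth falloff over `blend_px` pixels hides the seam between layers.
    return np.clip((fovea_radius_px + blend_px - dist) / blend_px, 0.0, 1.0)

def composite(path_traced, radiance_field, mask):
    """Linear blend of the two layers. Rays with mask == 0 can be culled
    entirely, so `path_traced` only needs valid values where mask > 0."""
    m = mask[..., None]  # broadcast over the RGB channels
    return m * path_traced + (1.0 - m) * radiance_field
```

In this sketch the fraction of pixels with a zero mask is the fraction of primary rays that never need to be traced, which is the source of the speedup and of its growth with display resolution (a fixed-size fovea covers a shrinking share of a larger framebuffer).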


References

Albert, R., Patney, A., Luebke, D., and Kim, J. (2017). Latency requirements for foveated rendering in virtual reality. ACM Transactions on Applied Perception (TAP), 14(4):1–13. DOI: https://doi.org/10.1145/3127589.

Alsop, T. (2024). Virtual reality (VR) - statistics & facts. [link] (accessed: 17 June 2024).

Barré-Brisebois, C., Halén, H., Wihlidal, G., Lauritzen, A., Bekkers, J., Stachowiak, T., and Andersson, J. (2019). Hybrid rendering for real-time ray tracing. In Ray Tracing Gems, pages 437–473. Springer. DOI: https://doi.org/10.1007/978-1-4842-4427-2.

Barron, J. T., Mildenhall, B., Verbin, D., Srinivasan, P. P., and Hedman, P. (2022). Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470–5479. DOI: https://doi.org/10.1109/CVPR52688.2022.00539.

Cao, A. and Johnson, J. (2023). Hexplane: A fast representation for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 130–141.

Caulfield, B. (2022). What is path tracing? NVIDIA Blog. [link] (accessed: 17 June 2024).

Chellappa, R. and Theodoridis, S. (2017). Academic Press Library in Signal Processing, Volume 6: Image and Video Processing and Analysis and Computer Vision. Elsevier Science & Technology, Saint Louis. DOI: https://doi.org/10.1016/C2016-0-00726-X.

Chen, A., Xu, Z., Geiger, A., Yu, J., and Su, H. (2022). Tensorf: Tensorial radiance fields. In European Conference on Computer Vision (ECCV).

Chen, Z., Funkhouser, T., Hedman, P., and Tagliasacchi, A. (2023). MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16569–16578, Vancouver, BC, Canada. IEEE. DOI: https://doi.org/10.1109/CVPR52729.2023.01590.

Duinkharjav, B., Chen, K., Tyagi, A., He, J., Zhu, Y., and Sun, Q. (2022). Color-perception-guided display power reduction for virtual reality. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 41(6):144:1–144:16.

Einhon, E. and Mawdsley, J. (2023). Generate groundbreaking ray-traced images with next-generation NVIDIA DLSS. [link] (accessed: 17 June 2024).

Fernandes, F., Castro, D., and Werner, C. (2022). Immersive learning research from svr publications: A re-conduction of the systematic mapping study. Journal on Interactive Systems, 13(1):205–220. DOI: https://doi.org/10.5753/jis.2022.2472.

Fridovich-Keil, S., Yu, A., Tancik, M., Chen, Q., Recht, B., and Kanazawa, A. (2022). Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5501–5510. DOI: https://doi.org/10.1109/CVPR52688.2022.00542.

Guenter, B., Finch, M., Drucker, S., Tan, D., and Snyder, J. (2012). Foveated 3d graphics. ACM Transactions on Graphics (TOG), 31(6):1–10. DOI: https://doi.org/10.1145/2366145.2366183.

Henriques, H., Oliveira, E., Clua, E., and Trevisan, D. (2023). A mixed path tracing and nerf approach for optimizing rendering in xr displays. In Proceedings of the 25th Symposium on Virtual and Augmented Reality, pages 123–130.

Infinite (2024). iNFINITE|VR Headset database and utility. [link] (accessed: 17 June 2024).

Jabbireddy, S., Sun, X., Meng, X., and Varshney, A. (2022). Foveated rendering: Motivation, taxonomy, and research directions. arXiv e-prints, pages 1–16. DOI: https://doi.org/10.48550/arXiv.2205.04529.

Kajiya, J. T. (1986). The rendering equation. In Proceedings of the 13th annual conference on Computer graphics and interactive techniques, pages 143–150. DOI: https://doi.org/10.1145/15922.15902.

Kerbl, B., Kopanas, G., Leimkühler, T., and Drettakis, G. (2023). 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4).

Kilgariff, E., Moreton, H., Stam, N., and Bell, B. (2018). NVIDIA Turing Architecture In-Depth | NVIDIA Technical Blog. [link] (accessed: 17 June 2024).

Kim, J. (2022). Individualized Foveated Rendering. page 60.

Koskela, M., Lotvonen, A., Mäkitalo, M., Kivi, P., Viitanen, T., and Jääskeläinen, P. (2019). Foveated Real-Time Path Tracing in Visual-Polar Space. In Boubekeur, T. and Sen, P., editors, Eurographics Symposium on Rendering - DL-only and Industry Track, pages 1–12. The Eurographics Association. DOI: https://doi.org/10.2312/sr.20191219.

Laine, S. and Karras, T. (2011). High-performance software rasterization on gpus. In Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics, pages 79–88.

Levoy, M. and Whitaker, R. (1990). Gaze-directed volume rendering. In Proceedings of the 1990 symposium on interactive 3d graphics, pages 217–223. DOI: https://doi.org/10.1145/91385.91449.

Li, Y., Yu, Z., Choy, C., Xiao, C., Alvarez, J. M., Fidler, S., Feng, C., and Anandkumar, A. (2023). Voxformer: Sparse voxel transformer for camera-based 3d semantic scene completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9087–9098.

Lin, C.-Y., Fu, Q., Merth, T., Yang, K., and Ranjan, A. (2024). Fastsr-nerf: Improving nerf efficiency on consumer devices with a simple super-resolution pipeline. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 6036–6045.

Liu, L., Gu, J., Zaw Lin, K., Chua, T.-S., and Theobalt, C. (2020). Neural Sparse Voxel Fields. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H., editors, Advances in Neural Information Processing Systems, volume 33, pages 15651–15663. Curran Associates, Inc.

Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. (2021). Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106. DOI: https://doi.org/10.1145/3503250.

Mohanto, B., Islam, A. T., Gobbetti, E., and Staadt, O. (2022). An integrative view of foveated rendering. Computers & Graphics, 102:474–501. DOI: https://doi.org/10.1016/j.cag.2021.10.010.

Müller, T., Evans, A., Schied, C., and Keller, A. (2022). Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4):1–15. DOI: https://doi.org/10.1145/3528223.3530127.

Müller, T., Rousselle, F., Novák, J., and Keller, A. (2021). Real-time neural radiance caching for path tracing. ACM Transactions on Graphics, 40(4):1–16. DOI: https://doi.org/10.1145/3450626.3459812.

Niedermayr, S., Stumpfegger, J., and Westermann, R. (2023). Compressed 3d gaussian splatting for accelerated novel view synthesis. arXiv preprint arXiv:2401.02436.

Niemeyer, M., Manhardt, F., Rakotosaona, M.-J., Oechsle, M., Duckworth, D., Gosula, R., Tateno, K., Bates, J., Kaeser, D., and Tombari, F. (2024). Radsplat: Radiance field-informed gaussian splatting for robust real-time rendering with 900+ fps. arXiv preprint.

Porcino, T., Trevisan, D., and Clua, E. (2021). A cybersickness review: causes, strategies, and classification methods. Journal on Interactive Systems, 12(1):269–282. DOI: https://doi.org/10.5753/jis.2021.2058.

Reiser, C., Peng, S., Liao, Y., and Geiger, A. (2021). Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. arXiv e-prints, pages 1–11. DOI: https://doi.org/10.48550/arXiv.2103.13744.

Schwarz, K., Sauer, A., Niemeyer, M., Liao, Y., and Geiger, A. (2022). Voxgraf: Fast 3d-aware image synthesis with sparse voxel grids. arXiv preprint arXiv:2206.07695, pages 1–22. DOI: https://doi.org/10.48550/arXiv.2206.07695.

ITU-R (2012). Methodology for the subjective assessment of the quality of television pictures. Recommendation ITU-R BT.500-13, pages 1–46.

Souza, A. M. d. C., Aureliano, T., Ghilardi, A. M., Ramos, E. A., Bessa, O. F. M., and Rennó-Costa, C. (2023). Dinosaurvr: Using virtual reality to enhance a museum exhibition. Journal on Interactive Systems, 14(1):363–370. DOI: https://doi.org/10.5753/jis.2023.3464.

Sun, C., Sun, M., and Chen, H.-T. (2022). Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5459–5469. DOI: https://doi.org/10.1109/CVPR52688.2022.00538.

Swafford, N. T., Iglesias-Guitian, J. A., Koniaris, C., Moon, B., Cosker, D., and Mitchell, K. (2016). User, metric, and computational evaluation of foveated rendering methods. In Proceedings of the ACM Symposium on Applied Perception, pages 7–14, Anaheim California. ACM. DOI: https://doi.org/10.1145/2931002.2931011.

Tanaka, E. H., Almeida, L. d., Gouveia, G. S. d. F., Clerici, R. P. S., Alves, A. H. F., and Oliveira, R. R. d. (2023). A collaborative, immersive, virtual reality environment for training electricians. Journal on Interactive Systems, 14(1):59–71. DOI: https://doi.org/10.5753/jis.2023.2685.

Tang, J., Ren, J., Zhou, H., Liu, Z., and Zeng, G. (2023). Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. arXiv preprint arXiv:2309.16653.

Toschi, M., De Matteo, R., Spezialetti, R., De Gregorio, D., Di Stefano, L., and Salti, S. (2023). Relight my nerf: A dataset for novel view synthesis and relighting of real world objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20762–20772.

Unity (2023). Getting started with ray tracing. Unity Technologies. [link] (accessed: 17 June 2024).

Unity (2024). Unity real-time development platform | 3d, 2d, vr & ar engine. Unity Technologies. [link] (accessed: 17 June 2024).

Wang, L., Shi, X., and Liu, Y. (2023). Foveated rendering: A state-of-the-art survey. Computational Visual Media, 9(2):195–228. DOI: https://doi.org/10.1007/s41095-022-0306-4.

Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612. DOI: https://doi.org/10.1109/TIP.2003.819861.

Warburton, M., Mon-Williams, M., Mushtaq, F., and Morehead, J. R. (2023). Measuring motion-to-photon latency for sensorimotor experiments with virtual reality systems. Behavior Research Methods, 55(7):3658–3678.

Weier, M., Roth, T., Kruijff, E., Hinkenjann, A., Pérard-Gayot, A., Slusallek, P., and Li, Y. (2016). Foveated real-time ray tracing for head-mounted displays. Computer Graphics Forum, 35(7):289–298. DOI: https://doi.org/10.1111/cgf.13026.

Whitted, T. (1979). An improved illumination model for shaded display. In Proceedings of the 6th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’79, page 14, New York, NY, USA. Association for Computing Machinery. DOI: https://doi.org/10.1145/800249.807419.

Whitted, T. (1980). An improved illumination model for shaded display. Commun. ACM, 23(6):343–349. DOI: https://doi.org/10.1145/358876.358882.

Wu, G., Yi, T., Fang, J., Xie, L., Zhang, X., Wei, W., Liu, W., Tian, Q., and Wang, X. (2023). 4d gaussian splatting for real-time dynamic scene rendering. arXiv preprint arXiv:2310.08528.

Xu, Y., Zoss, G., Chandran, P., Gross, M., Bradley, D., and Gotardo, P. (2023). Renerf: Relightable neural radiance fields with nearfield lighting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22581–22591.

Ye, J. (2022). Rectangular-Mapping-based-Foveated-Rendering. [link] (accessed: 17 June 2024).

Ye, J., Xie, A., Jabbireddy, S., Li, Y., Yang, X., and Meng, X. (2022). Rectangular mapping-based foveated rendering. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pages 756–764. IEEE. DOI: https://doi.org/10.1109/VR51125.2022.00097.

Published

2024-06-18

How to Cite

HENRIQUES, H.; OLIVEIRA, A. de; OLIVEIRA, E.; TREVISAN, D.; CLUA, E. Foveated Path Culling: A mixed path tracing and radiance field approach for optimizing rendering in XR Displays. Journal on Interactive Systems, Porto Alegre, RS, v. 15, n. 1, p. 576–590, 2024. DOI: 10.5753/jis.2024.4352. Available at: https://journals-sol.sbc.org.br/index.php/jis/article/view/4352. Accessed: 28 Sep. 2024.

Issue

Section

Regular Paper