Teaching Memory Organizations in Parallel Architectures Using Graphics Accelerator Boards
DOI: https://doi.org/10.5753/ijcae.2013.4945
Keywords: Graphics Cards, Teaching Computer Architecture
Abstract
This work proposes simple assignments using Graphics Processing Units (GPUs) to teach parallel architectures. NVIDIA GPUs have become very popular in less than a decade, since the CUDA framework appeared in 2007. The GPU is an interesting didactic resource because it is a parallel, programmable architecture, and it offers several memory organizations to be explored: main (global) memory, shared memory organized in banks, an L1 cache that can be resized or switched on and off, and specialized memories (texture and constant).
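As an illustration of the kind of assignment the abstract describes, the sketch below touches each memory organization in one small CUDA program: constant memory for a broadcast scalar, shared memory for a block-level reduction, and the per-kernel L1/shared-memory split. The kernel and variable names are ours, not taken from the paper; it is a minimal teaching sketch, not the authors' actual assignment code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Constant memory: cached, read-only from kernels, broadcast-friendly.
__constant__ float scale;

// Hypothetical teaching kernel: each block stages its slice of the input
// in shared memory (banked, on-chip) and reduces it to a partial sum.
__global__ void scaledSum(const float *in, float *out, int n) {
    __shared__ float tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] * scale : 0.0f;
    __syncthreads();
    // Tree reduction inside the shared-memory tile.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) tile[threadIdx.x] += tile[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) atomicAdd(out, tile[0]);
}

int main() {
    const int n = 1 << 20;
    float *in, *out, s = 2.0f;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;
    *out = 0.0f;
    cudaMemcpyToSymbol(scale, &s, sizeof(float));
    // Per-kernel L1 / shared-memory split: students can flip this
    // preference and measure its effect on run time.
    cudaFuncSetCacheConfig(scaledSum, cudaFuncCachePreferShared);
    scaledSum<<<n / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("sum = %.0f\n", *out);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

Timing the same kernel with `cudaFuncCachePreferL1` versus `cudaFuncCachePreferShared` is exactly the sort of measurement exercise such assignments can build on.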
References
J. L. Hennessy and D. A. Patterson, Computer architecture: a quantitative approach, 5th ed. Elsevier, 2012.
J.-H. Huang, “What’s next in GPU technology?” in GTC GPU Technology Conference, 2013.
D. B. Kirk and W.-m. W. Hwu, Programming massively parallel processors. Morgan Kaufmann, 2010.
J. Sanders and E. Kandrot, CUDA by example. Addison-Wesley Professional, 2010.
S. Cook, CUDA Programming: A Developer’s Guide to Parallel Computing with GPUs. Newnes, 2012.
N. Wilt, The CUDA Handbook: A Comprehensive Guide to GPU Programming. Addison-Wesley Professional, 2013.
F. Q. Pereira, Técnicas de otimização de código para placas gráficas [Code optimization techniques for graphics cards]. Jornadas de Atualização em Informática, 2011.
D. Luebke, M. Harris, N. Govindaraju, A. Lefohn, M. Houston, J. Owens, M. Segal, M. Papakipos, and I. Buck, “GPGPU: general-purpose computation on graphics hardware,” in ACM/IEEE Conf. on Supercomputing, 2006.
NVIDIA, “NVIDIA CUDA programming guide,” 2011.
P. Micikevicius, “Local memory and register spilling,” in GTC GPU Technology Conference, 2011.
License
Copyright (c) 2013 The authors
This work is licensed under a Creative Commons Attribution 4.0 International License.