Retrieval-Augmented Large Language Models for Computer Architecture Learning and Design Assistance

Authors

W. J. de Souza, H. T. M. Neto, and H. C. de Freitas

DOI:

https://doi.org/10.5753/ijcae.2025.6540

Keywords:

RAG, LLM, Computer Architecture, Learning and Design Assistant

Abstract

Computer architecture is a highly specialized field that demands expert knowledge. Large Language Models (LLMs) can support architectural design by improving the quality of project development. Moreover, they can serve as training tools, progressively strengthening individual skills and helping identify suitable components to fill specific architectural gaps. In this work, we propose combining an LLM with the Retrieval-Augmented Generation (RAG) technique to expand the model's knowledge and assist in identifying components of computer architectures. Experimental results indicate that LLMs can successfully identify some architectural components, while also revealing significant opportunities to refine the proposed methodology and to advance research on LLM-supported architecture design.
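
As a rough illustration of the approach described above, the sketch below retrieves domain notes by embedding similarity and prepends them to the model's prompt. The corpus, embedding model, and helper names are illustrative assumptions, not the authors' implementation.

```python
# Minimal RAG sketch: retrieve relevant architecture notes by embedding
# similarity, then prepend them as context for the LLM prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical domain corpus: short notes on architectural components.
corpus = [
    "A reorder buffer (ROB) enables precise exceptions in out-of-order cores.",
    "A translation lookaside buffer (TLB) caches virtual-to-physical mappings.",
    "A branch target buffer (BTB) stores predicted targets of taken branches.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence embedder works
doc_vecs = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # dot product = cosine similarity on unit vectors
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    """Augment the question with retrieved context before querying the LLM."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Which structure enables precise exceptions in an out-of-order pipeline?"))
```

Because the document vectors are L2-normalized, a plain dot product serves as the similarity score, keeping the retriever to a few lines.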

Downloads

Download data is not yet available.

Published

2025-12-30

How to Cite

de Souza, W. J., Neto, H. T. M., & de Freitas, H. C. (2025). Retrieval-Augmented Large Language Models for Computer Architecture Learning and Design Assistance. International Journal of Computer Architecture Education, 14(1), 12–18. https://doi.org/10.5753/ijcae.2025.6540

Issue

Vol. 14 No. 1 (2025)

Section

Full Papers