Large Language Models in Brazilian Portuguese: A Chronological Survey

Authors

W. A. Cruz-Castañeda and M. Amadeus

DOI:

https://doi.org/10.5753/jbcs.2025.5789

Keywords:

Brazilian Portuguese, Large Language Models, Architectures, Configuration

Abstract

The era of Large Language Models (LLMs) began with OpenAI's GPT-3, and the popularity of LLMs has grown rapidly since the introduction of models such as ChatGPT and GPT-4, which demonstrated remarkable capabilities on natural language processing tasks. LLMs are a special class of pre-trained language models (PLMs) obtained by scaling model size, pretraining corpora, and computational budget. Large PLMs can be valuable assets, particularly for languages such as Portuguese, for capturing the cultural and knowledge richness inherent in the language. In this sense, this survey provides, based on the existing scientific literature, an overview of research on LLMs for Brazilian Portuguese (PT-BR-LLMs). The objective is to offer a self-contained, comprehensive account of PT-BR-LLM advancements, architectures, and resources. It is intended not only as a systematic survey but also as a quick reference from which researchers and practitioners can draw insights, through informative summaries of existing scientific works, to advance the PT-BR-LLM research field. As new literature on PT-BR-LLMs emerges, updates will be made and regularly maintained in the project repository https://github.com/Amadeus-AI-Official/pt-br-llms

References

Abonizio, H., Almeida, T. S., Laitz, T., Junior, R. M., Bonás, G. K., Nogueira, R., and Pires, R. (2024). Sabiá-3 technical report. DOI: 10.48550/arxiv.2410.12049.

Alcoforado, A., Ferraz, T. P., Gerber, R., Bustos, E., Oliveira, A. S., Veloso, B. M., Siqueira, F. L., and Costa, A. H. R. (2022). Zeroberto: Leveraging zero-shot text classification by topic modeling. In Pinheiro, V., Gamallo, P., Amaro, R., Scarton, C., Batista, F., Silva, D., Magro, C., and Pinto, H., editors, Computational Processing of the Portuguese Language, pages 125-136, Cham. Springer International Publishing. DOI: 10.1007/978-3-030-98305-5_12.

Almeida, T. S., Abonizio, H., Nogueira, R., and Pires, R. (2024). Sabiá-2: A new generation of portuguese large language models. DOI: 10.48550/arxiv.2403.09887.

Bonifacio, L. H., Vilela, P. A., Lobato, G. R., and Fernandes, E. R. (2020). A study on the impact of intradomain finetuning of deep language models for legal named entity recognition in portuguese. In Cerri, R. and Prati, R. C., editors, Intelligent Systems, pages 648-662, Cham. Springer International Publishing. DOI: 10.1007/978-3-030-61377-8_46.

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. (2020). Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Red Hook, NY, USA. Curran Associates Inc.. DOI: 10.48550/arxiv.2005.14165.

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., and Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with gpt-4. DOI: 10.48550/arxiv.2303.12712.

Campiotti, I., Rodrigues, M., Albuquerque, Y., Azevedo, R., and Andrade, A. (2023). Debertinha: A multistep approach to adapt debertav3 xsmall for brazilian portuguese natural language processing task. DOI: 10.48550/arxiv.2309.16844.

Carmo, D., Piau, M., Campiotti, I., Nogueira, R., and Lotufo, R. (2020). Ptt5: Pretraining and validating the t5 model on brazilian portuguese data. DOI: 10.48550/arxiv.2008.09144.

Carneiro, F., Vianna, D., Carvalho, J., Plastino, A., and Paes, A. (2025). Bertweet.br: a pre-trained language model for tweets in portuguese. Neural Computing and Applications, 37(6):4363-4385. DOI: 10.1007/s00521-024-10711-3.

Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., Prabhakaran, V., Reif, E., Du, N., Hutchinson, B., Pope, R., Bradbury, J., Austin, J., Isard, M., Gur-Ari, G., Yin, P., Duke, T., Levskaya, A., Ghemawat, S., Dev, S., Michalewski, H., Garcia, X., Misra, V., Robinson, K., Fedus, L., Zhou, D., Ippolito, D., Luan, D., Lim, H., Zoph, B., Spiridonov, A., Sepassi, R., Dohan, D., Agrawal, S., Omernick, M., Dai, A. M., Pillai, T. S., Pellat, M., Lewkowycz, A., Moreira, E., Child, R., Polozov, O., Lee, K., Zhou, Z., Wang, X., Saeta, B., Diaz, M., Firat, O., Catasta, M., Wei, J., Meier-Hellstern, K., Eck, D., Dean, J., Petrov, S., and Fiedel, N. (2024). Palm: scaling language modeling with pathways. J. Mach. Learn. Res., 24(1). DOI: 10.48550/arxiv.2204.02311.

Ciurlino, V. (2021). Bertbr: a pretrained language model for law texts. Available online [link].

Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., and Stoyanov, V. (2020). Unsupervised cross-lingual representation learning at scale. In Jurafsky, D., Chai, J., Schluter, N., and Tetreault, J., editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics. DOI: 10.18653/v1/2020.acl-main.747.

Corrêa, N. K., Falk, S., Fatimah, S., Sen, A., and De Oliveira, N. (2024a). Teenytinyllama: Open-source tiny language models trained in brazilian portuguese. Machine Learning with Applications, 16:100558. DOI: 10.1016/j.mlwa.2024.100558.

Corrêa, N. K., Sen, A., Falk, S., and Fatimah, S. (2024b). Tucano: Advancing neural text generation for portuguese. DOI: 10.1016/j.patter.2025.101325.

Costa, P. B., Pavan, M. C., Santos, W. R., Silva, S. C., and Paraboni, I. (2023). BERTabaporu: Assessing a genre-specific language model for Portuguese NLP. In Mitkov, R. and Angelova, G., editors, Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 217-223, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria. Available online [link].

Cruz-Castañeda, W. A. and Amadeus, M. (2025). Amadeus-verbo technical report: The powerful qwen2.5 family models trained in portuguese. DOI: 10.48550/arXiv.2506.00019.

Cui, Y., Yang, Z., and Yao, X. (2024). Efficient and effective text encoding for chinese llama and alpaca. DOI: 10.48550/arxiv.2304.08177.

de Mello, G. L., Finger, M., Serras, F., de Mello Carpi, M., Jose, M. M., Domingues, P. H., and Cavalim, P. (2024). Pelle: Encoder-based language models for brazilian portuguese based on open data.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Burstein, J., Doran, C., and Solorio, T., editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. DOI: 10.18653/v1/N19-1423.

Dong, Q., Li, L., Dai, D., Zheng, C., Ma, J., Li, R., Xia, H., Xu, J., Wu, Z., Chang, B., Sun, X., Li, L., and Sui, Z. (2024). A survey on in-context learning. In Al-Onaizan, Y., Bansal, M., and Chen, Y.-N., editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1107-1128, Miami, Florida, USA. Association for Computational Linguistics. DOI: 10.18653/v1/2024.emnlp-main.64.

Finardi, P., Viegas, J. D., Ferreira, G. T., Mansano, A. F., and Caridá, V. F. (2021). Bertaú: Itaú bert for digital customer service. DOI: 10.48550/arXiv.2101.12015.

Garcia, E. A. S., Silva, N. F. F., Siqueira, F., Albuquerque, H. O., Gomes, J. R. S., Souza, E., and Lima, E. A. (2024a). RoBERTaLexPT: A legal RoBERTa model pretrained with deduplication for Portuguese. In Gamallo, P., Claro, D., Teixeira, A., Real, L., Garcia, M., Oliveira, H. G., and Amaro, R., editors, Proceedings of the 16th International Conference on Computational Processing of Portuguese - Vol. 1, pages 374-383, Santiago de Compostela, Galicia/Spain. Association for Computational Lingustics. Available online [link].

Garcia, G. L., Paiola, P. H., Morelli, L. H., Candido, G., Júnior, A. C., Jodas, D. S., Afonso, L. C. S., Guilherme, I. R., Penteado, B. E., and Papa, J. P. (2024b). Introducing bode: A fine-tuned large language model for portuguese prompt-based task. DOI: 10.48550/arxiv.2401.02909.

Geng, X. and Liu, H. (2023). Openllama: An open reproduction of llama. Available online [link].

Guillou, P. (2020). Gportuguese-2 (portuguese gpt-2 small): a language model for portuguese text generation (and more nlp tasks...). Available online [link].

Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., de Las Casas, D., Hendricks, L. A., Welbl, J., Clark, A., Hennigan, T., Noland, E., Millican, K., van den Driessche, G., Damoc, B., Guy, A., Osindero, S., Simonyan, K., Elsen, E., Vinyals, O., Rae, J. W., and Sifre, L. (2024). Training compute-optimal large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA. Curran Associates Inc.. DOI: 10.48550/arxiv.2203.15556.

Huang, J. and Chang, K. C.-C. (2023). Towards reasoning in large language models: A survey. In Rogers, A., Boyd-Graber, J., and Okazaki, N., editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 1049-1065, Toronto, Canada. Association for Computational Linguistics. DOI: 10.18653/v1/2023.findings-acl.67.

José, M. A. and Cozman, F. G. (2021). mrat-sql+gap: A portuguese text-to-sql transformer. In Britto, A. and Valdivia Delgado, K., editors, Intelligent Systems, pages 511-525, Cham. Springer International Publishing. DOI: 10.1007/978-3-030-91699-2_35.

Junior, R. M., Pires, R., Romero, R., and Nogueira, R. (2024). Juru: Legal brazilian large language model from reputable sources. DOI: 10.48550/arxiv.2403.18140.

Kalyan, K. S. (2024). A survey of gpt-3 family large language models including chatgpt and gpt-4. Natural Language Processing Journal, 6:100048. DOI: 10.1016/j.nlp.2023.100048.

Lai, V., Ngo, N., Pouran Ben Veyseh, A., Man, H., Dernoncourt, F., Bui, T., and Nguyen, T. (2023). ChatGPT beyond English: Towards a comprehensive evaluation of large language models in multilingual learning. In Bouamor, H., Pino, J., and Bali, K., editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13171-13189, Singapore. Association for Computational Linguistics. DOI: 10.18653/v1/2023.findings-emnlp.878.

Larcher, C., Piau, M., Finardi, P., Gengo, P., Esposito, P., and Caridá, V. (2023). Cabrita: closing the gap for foreign languages. DOI: 10.48550/arxiv.2308.11878.

Li, J., Tang, T., Zhao, W. X., Nie, J.-Y., and Wen, J.-R. (2024a). Pre-trained language models for text generation: A survey. ACM Comput. Surv., 56(9). DOI: 10.1145/3649449.

Li, Z., Shi, Y., Liu, Z., Yang, F., Payani, A., Liu, N., and Du, M. (2024b). Language ranker: A metric for quantifying llm performance across high and low-resource languages. DOI: 10.1609/aaai.v39i27.35038.

Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G. (2023). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv., 55(9). DOI: 10.1145/3560815.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. DOI: 10.48550/arxiv.1907.11692.

Marion, M., Üstün, A., Pozzobon, L., Wang, A., Fadaee, M., and Hooker, S. (2023). When less is more: Investigating data pruning for pretraining llms at scale. DOI: 10.48550/arxiv.2309.04564.

Matarazzo, A. and Torlone, R. (2025). A survey on large language models with some insights on their capabilities and limitations. DOI: 10.48550/arxiv.2501.04040.

Min, B., Ross, H., Sulem, E., Veyseh, A. P. B., Nguyen, T. H., Sainz, O., Agirre, E., Heintz, I., and Roth, D. (2023). Recent advances in natural language processing via large pre-trained language models: A survey. ACM Comput. Surv., 56(2). DOI: 10.1145/3605943.

Miranda, B., Lee, A., Sundar, S., Casasola, A., and Koyejo, S. (2024). Beyond scale: The diversity coefficient as a data quality metric for variability in natural language data.

Naous, T., Ryan, M. J., Ritter, A., and Xu, W. (2024). Having beer after prayer? measuring cultural bias in large language models. In Ku, L.-W., Martins, A., and Srikumar, V., editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16366-16393, Bangkok, Thailand. Association for Computational Linguistics. DOI: 10.18653/v1/2024.acl-long.862.

Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N., and Mian, A. (2024). A comprehensive overview of large language models. DOI: 10.48550/arXiv.2307.06435.

OpenAI (2022). Introducing chatgpt. Available online [link].

Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., and Lowe, R. (2024). Training language models to follow instructions with human feedback. In Proceedings of the 36th International Conference on Neural Information Processing Systems, Red Hook, NY, USA. Curran Associates Inc.. DOI: 10.48550/arxiv.2203.02155.

Pellicer, L. F. A. O., Pirozelli, P., Costa, A. H. R., and Inoue, A. (2022). Ptt5-paraphraser: Diversity and meaning fidelity in automatic portuguese paraphrasing. In Pinheiro, V., Gamallo, P., Amaro, R., Scarton, C., Batista, F., Silva, D., Magro, C., and Pinto, H., editors, Computational Processing of the Portuguese Language, pages 299-309, Cham. Springer International Publishing. DOI: 10.1007/978-3-030-98305-5_28.

Piau, M., Lotufo, R., and Nogueira, R. (2024). ptt5-v2: A closer look at continued pretraining of t5 models for the portuguese language. DOI: 10.1007/978-3-031-79032-4_23.

Pires, H., Paucar, L., and Carvalho, J. P. (2025). Deb3rta: A transformer-based model for the portuguese financial domain. Big Data and Cognitive Computing, 9(3). DOI: 10.3390/bdcc9030051.

Pires, R., Abonizio, H., Almeida, T. S., and Nogueira, R. (2023). Sabiá: Portuguese large language models. In Naldi, M. C. and Bianchi, R. A. C., editors, Intelligent Systems, pages 226-240, Cham. Springer Nature Switzerland. DOI: 10.1007/978-3-031-45392-2_15.

Qin, L., Chen, Q., Zhou, Y., Chen, Z., Li, Y., Liao, L., Li, M., Che, W., and Yu, P. S. (2025). A survey of multilingual large language models. Patterns, 6(1). DOI: 10.1016/j.patter.2024.101118.

Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., and Huang, X. (2020). Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872-1897. DOI: 10.1007/s11431-020-1647-3.

Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F., Aslanides, J., Henderson, S., Ring, R., Young, S., et al. (2022). Scaling language models: Methods, analysis and insights from training gopher.

Rodrigues, J., Gomes, L., Silva, J., Branco, A., Santos, R., Cardoso, H. L., and Osório, T. (2023). Advancing Neural Encoding of Portuguese with Transformer Albertina PT-*, page 441–453. Springer Nature Switzerland. DOI: 10.1007/978-3-031-49008-8_35.

Rodrigues, R. B. M., Privatto, P. I. M., de Sousa, G. J., Murari, R. P., Afonso, L. C. S., Papa, J. P., Pedronette, D. C. G., Guilherme, I. R., Perrout, S. R., and Riente, A. F. (2022). Petrobert: A domain adaptation language model for oil and gas applications in portuguese. In Pinheiro, V., Gamallo, P., Amaro, R., Scarton, C., Batista, F., Silva, D., Magro, C., and Pinto, H., editors, Computational Processing of the Portuguese Language, pages 101-109, Cham. Springer International Publishing. DOI: 10.1007/978-3-030-98305-5_10.

Santos, R., Silva, J. R., Gomes, L., Rodrigues, J., and Branco, A. (2024). Advancing generative AI for Portuguese with open decoder gervásio PT*. In Melero, M., Sakti, S., and Soria, C., editors, Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024, pages 16-26, Torino, Italia. ELRA and ICCL. DOI: 10.48550/arXiv.2402.18766.

Schneider, E. T. R., de Souza, J. V. A., Gumiel, Y. B., Moro, C., and Paraiso, E. C. (2021). A gpt-2 language model for biomedical texts in portuguese. In 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), pages 474-479. DOI: 10.1109/CBMS52027.2021.00056.

Schneider, E. T. R., de Souza, J. V. A., Knafou, J., Oliveira, L. E. S. e., Copara, J., Gumiel, Y. B., Oliveira, L. F. A. d., Paraiso, E. C., Teodoro, D., and Barra, C. M. C. M. (2020). BioBERTpt - a Portuguese neural language model for clinical named entity recognition. In Rumshisky, A., Roberts, K., Bethard, S., and Naumann, T., editors, Proceedings of the 3rd Clinical Natural Language Processing Workshop, pages 65-72, Online. Association for Computational Linguistics. DOI: 10.18653/v1/2020.clinicalnlp-1.7.

Silva, M. O., Oliveira, G. P., Costa, L. G. L., and Pappa, G. L. (2025). Govbert-br: A bert-based language model for brazilian portuguese governmental data. In Paes, A. and Verri, F. A. N., editors, Intelligent Systems, pages 19-32, Cham. Springer Nature Switzerland. DOI: 10.1007/978-3-031-79032-4_2.

Silveira, R., Ponte, C., Almeida, V., Pinheiro, V., and Furtado, V. (2023). Legalbert-pt: A pretrained language model for the brazilian portuguese legal domain. In Naldi, M. C. and Bianchi, R. A. C., editors, Intelligent Systems, pages 268-282, Cham. Springer Nature Switzerland. DOI: 10.1007/978-3-031-45392-2_18.

Souza, F., Nogueira, R., and Lotufo, R. (2020a). Bertimbau: Pretrained bert models for brazilian portuguese. In Cerri, R. and Prati, R. C., editors, Intelligent Systems, pages 403-417, Cham. Springer International Publishing. DOI: 10.1007/978-3-030-61377-8_28.

Souza, F., Nogueira, R., and Lotufo, R. (2020b). Portuguese named entity recognition using bert-crf. arXiv, (1909.10649). DOI: 10.48550/arxiv.1909.10649.

Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., and Lample, G. (2023a). Llama: Open and efficient foundation language models. DOI: 10.48550/arxiv.2302.13971.

Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. (2023b). Llama 2: Open foundation and fine-tuned chat models. DOI: 10.48550/arxiv.2307.09288.

Viegas, C. F. O., Costa, B. C., and Ishii, R. P. (2023). Jurisbert: A new approach that converts a classification corpus into an sts one. In Gervasi, O., Murgante, B., Taniar, D., Apduhan, B. O., Braga, A. C., Garau, C., and Stratigea, A., editors, Computational Science and Its Applications - ICCSA 2023, pages 349-365, Cham. Springer Nature Switzerland. DOI: 10.1007/978-3-031-36805-9_24.

Wang, B. and Komatsuzaki, A. (2021). GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. Available online [link].

Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E. H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., and Fedus, W. (2022). Emergent abilities of large language models. DOI: 10.48550/arxiv.2206.07682.

Yue, X., Song, Y., Asai, A., Kim, S., de Dieu Nyandwi, J., Khanuja, S., Kantharuban, A., Sutawika, L., Ramamoorthy, S., and Neubig, G. (2025). Pangea: A fully open multilingual multimodal llm for 39 languages. DOI: 10.48550/arXiv.2410.16153.

Zanuz, L. and Rigo, S. J. (2022). Fostering judiciary applications with new fine-tuned models for legal named entity recognition in portuguese. In Pinheiro, V., Gamallo, P., Amaro, R., Scarton, C., Batista, F., Silva, D., Magro, C., and Pinto, H., editors, Computational Processing of the Portuguese Language, pages 219-229, Cham. Springer International Publishing. DOI: 10.1007/978-3-030-98305-5_21.

Zhang, P., Zeng, G., Wang, T., and Lu, W. (2024). Tinyllama: An open-source small language model. DOI: 10.48550/arxiv.2401.02385.

Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., Liu, P., Nie, J.-Y., and Wen, J.-R. (2023). A survey of large language models. DOI: 10.48550/arXiv.2303.18223.

Zhou, C., Li, Q., Li, C., Yu, J., Liu, Y., Wang, G., Zhang, K., Ji, C., Yan, Q., He, L., Peng, H., Li, J., Wu, J., Liu, Z., Xie, P., Xiong, C., Pei, J., Yu, P. S., and Sun, L. (2023). A comprehensive survey on pretrained foundation models: A history from bert to chatgpt. DOI: 10.48550/arXiv.2302.09419.

Zhuang, F., Qi, Z., Duan, K., Xi, D., Zhu, Y., Zhu, H., Xiong, H., and He, Q. (2021). A comprehensive survey on transfer learning. Proceedings of the IEEE, 109(1):43-76. DOI: 10.1109/JPROC.2020.3004555.

Published

2025-10-24

How to Cite

Cruz-Castañeda, W. A., & Amadeus, M. (2025). Large Language Models in Brazilian Portuguese: A Chronological Survey. Journal of the Brazilian Computer Society, 31(1), 1168–1187. https://doi.org/10.5753/jbcs.2025.5789

Section

Articles