The Influence of Search Engine Ranking Quality on the Performance of Programmers

Authors

A. M. Rocha, C. E. C. Dantas, M. de Almeida Maia

DOI:

https://doi.org/10.5753/jserd.2025.4494

Keywords:

Software Reuse, Search Engines, Ranking Quality

Abstract

Software development is an activity characterized by programmers' continuous search for information. General-purpose search engines, such as Google, Bing, and Yahoo, are widely used by programmers to find solutions to their problems. However, the best solutions are not always among the first pages of search results ranked by the search engine's algorithms. This study investigates the influence that the order of the pages returned by search engine ranking exerts on programmers' performance when solving programming tasks. We designed an empirical within-subject study with programmers to understand and evaluate their performance when solving programming tasks using a ranked list of pages returned by the Google search engine and artificially modified into two ranking quality levels (Higher Quality Ranking and Lower Quality Ranking). Moreover, for each entry in the ranking, the most frequent methods mentioned on the respective page were listed in the ranking visualization. Participants' recorded videos were analyzed through a mixed-methods research approach to provide insights into the results. We found that programmers spent approximately eight minutes longer solving tasks associated with a Lower Quality Ranking, spending more time on irrelevant pages than on relevant ones because of efforts to fix problematic code or new searches for another page. Adding a list of frequent methods to the ranking visualization could help programmers skip irrelevant pages and reduce wasted time. The ranking quality therefore influences programmers' performance during the development of programming tasks, and we suggest the development of filters aimed at improving the quality of the results delivered by search engines. Moreover, the results may encourage adapting this study to other approaches that require information foraging, such as chatting with LLMs.
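
As a rough illustration of the ranking augmentation described in the abstract, the Python sketch below shows one way the most frequent methods mentioned on a result page could be extracted; the regex heuristic, the function name, and the sample text are illustrative assumptions only, not the instrumentation actually used in the study.

import re
from collections import Counter
from typing import List, Tuple

# Heuristic: treat ".name(" occurrences in the page text as method mentions.
# This pattern and the helper below are assumptions for illustration only.
METHOD_CALL = re.compile(r"\.([A-Za-z_][A-Za-z0-9_]*)\s*\(")

def most_frequent_methods(page_text: str, top_n: int = 5) -> List[Tuple[str, int]]:
    """Return the method names mentioned most often in a page's text."""
    calls = METHOD_CALL.findall(page_text)
    return Counter(calls).most_common(top_n)

if __name__ == "__main__":
    sample = (
        "To read the file, call reader.readLine() in a loop and then reader.close(). "
        "Alternatively, Files.readAllLines(path) returns every line at once."
    )
    # Example output: [('readLine', 1), ('close', 1), ('readAllLines', 1)]
    print(most_frequent_methods(sample))

Each entry of the ranked list could then be annotated with such a summary, letting a programmer judge a page's likely relevance before opening it.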

References

Buse, R. P. L. and Weimer, W. (2012). Synthesizing API usage examples. In Proceedings of the 34th International Conference on Software Engineering, ICSE ’12, pages 782–792. IEEE Press.

Cao, K., Chen, C., Baltes, S., Treude, C., and Chen, X. (2021). Automated query reformulation for efficient search based on query logs from Stack Overflow. CoRR, abs/2102.00826.

Chatterjee, S., Juvekar, S., and Sen, K. (2009). Sniff: A search engine for Java using free-form queries. In Chechik, M. and Wirsing, M., editors, Fundamental Approaches to Software Engineering, pages 385–400, Berlin, Heidelberg. Springer Berlin Heidelberg.

Cho, J. and Roy, S. (2004). Impact of search engines on page popularity. In Proceedings of the 13th International Conference on World Wide Web, WWW ’04, pages 20–29, New York, NY, USA. Association for Computing Machinery.

Dantas, C., Rocha, A., and Maia, M. (2023). Assessing the readability of ChatGPT code snippet recommendations: A comparative study. In Proceedings of the XXXVII Brazilian Symposium on Software Engineering, SBES ’23, pages 283–292, New York, NY, USA. Association for Computing Machinery.

Ebert, C. and Louridas, P. (2023). Generative AI for software practitioners. IEEE Software, 40:30–38.

Fischer, F., Stachelscheid, Y., and Grossklags, J. (2021). The effect of Google search on software security: Unobtrusive security interventions via content re-ranking. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, CCS ’21, pages 3070–3084, New York, NY, USA. Association for Computing Machinery.

Fleming, S. D., Scaffidi, C., Piorkowski, D., Burnett, M., Bellamy, R., Lawrance, J., and Kwan, I. (2013). An information foraging theory perspective on tools for debugging, refactoring, and reuse tasks. ACM Trans. Softw. Eng. Methodol., 22(2).

Gallardo-Valencia, R. E. and Elliott Sim, S. (2009). Internet-scale code search. In Proceedings of the 2009 ICSE Workshop on Search-Driven Development-Users, Infrastructure, Tools and Evaluation, SUITE ’09, pages 49–52, USA. IEEE Computer Society.

Hora, A. (2021a). Characterizing top ranked code examples in Google. Journal of Systems and Software.

Hora, A. (2021b). Googling for software development: What developers search for and what they find. In Proceedings of the International Conference on Mining Software Repositories (MSR 2021), pages 1–12.

Johnson, R. E. (1992). Documenting frameworks using patterns. In Conference Proceedings on Object-Oriented Programming Systems, Languages, and Applications, OOPSLA ’92, pages 63–76, New York, NY, USA. Association for Computing Machinery.

Keane, M. T., O’Brien, M., and Smyth, B. (2008). Are people biased in their use of search engines? Commun. ACM, 51(2):49–52.

Kim, J., Lee, S., Hwang, S.-w., and Kim, S. (2010). Towards an intelligent code search engine. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI’10, pages 1358–1363. AAAI Press.

Kuttal, S. K., Kim, S. Y., Martos, C., and Bejarano, A. (2021). How end-user programmers forage in online repositories? An information foraging perspective. Journal of Computer Languages, 62:101010.

Mondal, S., Bappon, S. D., and Roy, C. (2024). Enhancing user interaction in ChatGPT: Characterizing and consolidating multiple prompts for issue resolution. In Proceedings of the International Conference on Mining Software Repositories (MSR 2024).

Nasehi, S. M., Sillito, J., Maurer, F., and Burns, C. (2012). What makes a good code example? A study of programming Q&A in StackOverflow. In Proceedings of the IEEE International Conference on Software Maintenance, ICSM ’12, pages 25–34, Washington, DC, USA.

Niu, H., Keivanloo, I., and Zou, Y. (2017). Learning to rank code examples for code search engines. Empirical Softw. Engg., 22(1):259–291.

Nykaza, J., Messinger, R., Boehme, F., Norman, C. L., Mace, M., and Gordon, M. (2002). What programmers really want: Results of a needs assessment for SDK documentation. In Proceedings of the 20th Annual International Conference on Computer Documentation, SIGDOC ’02, pages 133–141, New York, NY, USA. Association for Computing Machinery.

Rahman, M. M., Barson, J., Paul, S., Kayani, J., Lois, F. A., Quezada, S. F., Parnin, C., Stolee, K. T., and Ray, B. (2018). Evaluating how developers use general-purpose web-search for code retrieval. In Proceedings of the 15th International Conference on Mining Software Repositories, MSR ’18, pages 465–475, New York, NY, USA. Association for Computing Machinery.

Robillard, M. P. (2009). What makes APIs hard to learn? Answers from developers. IEEE Softw., 26(6):27–34.

Rocha, A. M. and Maia, M. A. (2023). Mining relevant solutions for programming tasks from search engine results. IET Software, 17(4):455–471.

Sadowski, C., Stolee, K. T., and Elbaum, S. (2015). How developers search for code: A case study. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, pages 191–201, New York, NY, USA. Association for Computing Machinery.

Sim, S. E., Umarji, M., Ratanotayanon, S., and Lopes, C. V. (2011). How well do search engines support code retrieval on the web? ACM Trans. Softw. Eng. Methodol., 21(1).

Stolee, K. T., Elbaum, S., and Dobos, D. (2014). Solving the search for source code. ACM Trans. Softw. Eng. Methodol., 23(3).

Tufano, R., Mastropaolo, A., Pepe, F., Dabić, O., Penta, M. D., and Bavota, G. (2024). Unveiling ChatGPT’s usage in open source projects: A mining-based study.

Xia, X., Bao, L., Lo, D., Kochhar, P. S., Hassan, A. E., and Xing, Z. (2017). What do developers search for on the web? Empirical Softw. Engg., 22(6):3149–3185.

Xiao, T., Treude, C., Hata, H., and Matsumoto, K. (2024). DevGPT: Studying developer-ChatGPT conversations. In Proceedings of the International Conference on Mining Software Repositories (MSR 2024).

Zha, Z.-J., Yang, L., Mei, T., Wang, M., Wang, Z., Chua, T.-S., and Hua, X.-S. (2010). Visual query suggestion: Towards capturing user intent in internet image search. ACM Trans. Multimedia Comput. Commun. Appl., 6(3).

Zuccon, G., Koopman, B., and Shaik, R. (2023). ChatGPT hallucinates when attributing answers. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, SIGIR-AP ’23, pages 46–51, New York, NY, USA. Association for Computing Machinery.

Published

2025-06-16

How to Cite

Rocha, A. M., Dantas, C. E. C., & de Almeida Maia, M. (2025). The Influence of Search Engine Ranking Quality on the Performance of Programmers. Journal of Software Engineering Research and Development, 13(2), 13:69–13:86. https://doi.org/10.5753/jserd.2025.4494

Issue

Vol. 13 No. 2 (2025)

Section

Research Article