RecSys-Fairness: A Framework for Reducing Group Unfairness in Recommendations

Authors

R. V. M. dos Santos and G. V. Comarela
DOI:

https://doi.org/10.5753/jbcs.2026.5457

Keywords:

Recommender Systems, Fairness, Individual Fairness, Group Fairness

Abstract

In this study, we address the problem of promoting fairness in recommender systems, which are highly susceptible to biases that can produce unfair outcomes for different user groups. We developed a fairness algorithm to mitigate these disparities and applied it to the MovieLens dataset, analyzing the recommendations produced by the ALS (Alternating Least Squares) and NCF (Neural Collaborative Filtering) methods. Users were grouped by activity level, gender, and age. The results demonstrate that the fairness algorithm substantially reduces group unfairness (R_{grp}) across all tested configurations without significant losses in recommendation accuracy, measured by the Root Mean Squared Error (RMSE); in particular, the ALS method showed a reduction in group unfairness of up to 65.57%. We also identified optimal convergence of the fairness algorithm for a number of estimated matrices (h) between 10 and 15, suggesting an effective balance point between promoting fairness and maintaining recommendation accuracy. Compared with the available benchmarks, under identical experimental conditions, we improved the group unfairness reduction by approximately 6 percentage points (from 59.77% to 65.57%).
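To make the evaluation setup concrete, the sketch below computes per-group RMSE and a spread-based group-unfairness score. This is a minimal illustration of the general idea, not the paper's exact R_{grp} definition: it assumes unfairness is measured as the average deviation of each group's loss from the mean group loss, so a score of zero means all groups are served equally well. All names here are illustrative.

```python
import math
from collections import defaultdict

def group_unfairness(y_true, y_pred, groups):
    """Per-group RMSE and a spread-based unfairness score (illustrative)."""
    # Collect squared prediction errors per user group.
    sq_errors = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        sq_errors[g].append((t - p) ** 2)
    # Per-group RMSE.
    losses = {g: math.sqrt(sum(e) / len(e)) for g, e in sq_errors.items()}
    # Spread of group losses around their mean: 0 means perfectly fair.
    mean_loss = sum(losses.values()) / len(losses)
    r_grp = sum(abs(l - mean_loss) for l in losses.values()) / len(losses)
    return losses, r_grp

# Example: group 'a' is predicted perfectly, group 'b' is off by 2 stars.
losses, r_grp = group_unfairness(
    y_true=[4, 4, 2, 2], y_pred=[4, 4, 4, 4], groups=["a", "a", "b", "b"]
)
```

In this toy case the per-group RMSEs are 0.0 and 2.0, giving a mean group loss of 1.0 and an unfairness score of 1.0; a fairness intervention would aim to shrink that gap while keeping the overall RMSE low.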

References

Beutel, A., Chi, E. H., Cheng, Z., Pham, H., and Anderson, J. (2017). Beyond globally optimal: Focused learning for improved recommendations. In Proceedings of the 26th International Conference on World Wide Web, WWW 2017, Perth, Australia, April 3-7, 2017. DOI: 10.1145/3038912.3052713.

Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg. Book.

Burke, R., Sonboli, N., and Ordonez-Gauger, A. (2018). Balanced neighborhoods for multi-sided fairness in recommendation. In FAT. Available at: [link].

Dandekar, P., Goel, A., and Lee, D. (2013). Biased assimilation, homophily and the dynamics of polarization. Proceedings of the National Academy of Sciences of the United States of America, 110. DOI: 10.1073/pnas.1217220110.

Deldjoo, Y., Anelli, V. W., Zamani, H., et al. (2021). A flexible framework for evaluating user and item fairness in recommender systems. User Modeling and User-Adapted Interaction, 31:457-511. DOI: 10.1007/s11257-020-09285-1.

Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. S. (2011). Fairness through awareness. CoRR, abs/1104.3913. DOI: 10.48550/arXiv.1104.3913.

Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. MIT Press. Book.

Gurobi Optimization, LLC (2024). Gurobi optimizer. Available at: [link].

Hardt, M. (2013). On the provable convergence of alternating minimization for matrix completion. CoRR, abs/1312.0925. DOI: 10.48550/arXiv.1312.0925.

Hardt, M., Price, E., and Srebro, N. (2016). Equality of opportunity in supervised learning. CoRR, abs/1610.02413. DOI: 10.48550/arXiv.1610.02413.

Harper, F. M. and Konstan, J. A. (2015). The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):1-19. DOI: 10.1145/2827872.

Hastie, T., Mazumder, R., Lee, J., and Zadeh, R. (2014). Matrix completion and low-rank SVD via fast alternating least squares. DOI: 10.48550/ARXIV.1410.2596.

Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2 edition. DOI: 10.1007/978-0-387-84858-7.

He, X., Liao, L., Zhang, H., Nie, L., Hu, X., and Chua, T.-S. (2016). Fast matrix factorization for online recommendation with implicit feedback. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 549-558. ACM. DOI: 10.48550/arXiv.1708.05024.

He, X., Liao, L., Zhang, H., Nie, L., Hu, X., and Chua, T.-S. (2017). Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, pages 173-182. ACM. DOI: 10.1145/3038912.3052569.

James, G., Witten, D., Hastie, T., and Tibshirani, R. (2013). An Introduction to Statistical Learning: with Applications in R. Springer. DOI: 10.25334/q4ht55.

Kamishima, T. and Akaho, S. (2017). Considerations on recommendation independence for a find-good-items task. In 11th ACM Conference on Recommender Systems. DOI: 10.18122/B2871W.

Kamishima, T., Akaho, S., and Asoh, H. (2012). Enhancement of the neutrality in recommendation. In In Proc. of the 2nd Workshop on Human Decision Making in Recommender Systems, pages 8-14. Available at:[link].

Kamishima, T., Akaho, S., Asoh, H., and Sakuma, J. (2018). Recommendation independence. In Friedler, S. A. and Wilson, C., editors, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research, pages 187-201. PMLR. Available at: [link].

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60:84-90. DOI: 10.1145/3065386.

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436-444. DOI: 10.1038/nature14539.

Niemiec, W., Borges, R., and Barone, D. (2022). Artificial intelligence discrimination: how to deal with it? In Anais do III Workshop sobre as Implicações da Computação na Sociedade, pages 93-100, Porto Alegre, RS, Brasil. SBC. DOI: 10.5753/wics.2022.222604.

Rastegarpanah, B., Gummadi, K. P., and Crovella, M. (2019). Fighting fire with fire: Using antidote data to improve polarization and fairness of recommender systems. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM ’19. ACM. DOI: 10.1145/3289600.3291002.

Ruback, L., Avila, S., and Cantero, L. (2021). Vieses no aprendizado de máquina e suas implicações sociais: Um estudo de caso no reconhecimento facial. In Anais do II Workshop sobre as Implicações da Computação na Sociedade, pages 90-101, Porto Alegre, RS, Brasil. SBC. DOI: 10.5753/wics.2021.15967.

Taso, F., Reis, V., and Martinez, F. (2023). Discriminação algorítmica de gênero: Estudo de caso e análise no contexto brasileiro. In Anais do IV Workshop sobre as Implicações da Computação na Sociedade, pages 13-25, Porto Alegre, RS, Brasil. SBC. DOI: 10.5753/wics.2023.229980.

Wang, H., Wang, N., and Yeung, D.-Y. (2018). Collaborative filtering with social regularization. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 135-144. ACM. DOI: 10.1145/3219819.3219823.

Yao, S. and Huang, B. (2017). Beyond parity: Fairness objectives for collaborative filtering. CoRR, abs/1705.08804. DOI: 10.48550/arXiv.1705.08804.

Zemel, R., Wu, Y., Swersky, K., Pitassi, T., and Dwork, C. (2013). Learning fair representations. In Dasgupta, S. and McAllester, D., editors, Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 325-333, Atlanta, Georgia, USA. PMLR. Available at: [link].

Published

2026-02-21

How to Cite

dos Santos, R. V. M., & Comarela, G. V. (2026). RecSys-Fairness: A Framework for Reducing Group Unfairness in Recommendations. Journal of the Brazilian Computer Society, 32(1), 159–170. https://doi.org/10.5753/jbcs.2026.5457

Issue

Section

Regular Issue