Fairness and Performance Trade-off in Machine Learning Systems: A Literature Review
DOI: https://doi.org/10.5753/isys.2025.6007

Keywords: Algorithmic Fairness, Machine Learning, Decision Systems, Algorithmic Unfairness, Fairness Metrics

Abstract
This study examines algorithmic unfairness in automated decision-making systems, emphasizing equity and transparency given the impact of these systems. Machine learning algorithms often reproduce biases present in their training data, leading to discriminatory outcomes. Several mitigation strategies have been proposed, including fairness evaluation methods and bias detection tools. This review analyzes these approaches individually and in combination, assessing their ability to balance performance and fairness. The analysis yields two main findings: a trade-off between accuracy and fairness, and a lack of standardized fairness metrics, which limits comparison across studies. By systematizing these results, this study advances the debate on ethics and fairness in automated systems.
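Because the review centers on fairness metrics and the accuracy-fairness trade-off, the minimal sketch below illustrates how one widely used metric, the demographic parity difference, can be computed alongside accuracy for a binary classifier. The synthetic data, the 0.5 decision threshold, and the function names are illustrative assumptions for this sketch, not artifacts of the reviewed studies.

```python
# Illustrative sketch (assumed, not taken from the reviewed studies):
# measuring accuracy together with one common fairness metric, the
# demographic parity difference, on synthetic binary-classification data.
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return float((y_true == y_pred).mean())

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups
    defined by a binary sensitive attribute (coded 0 or 1)."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return float(abs(rate_0 - rate_1))

rng = np.random.default_rng(42)
n = 1000
y_true = rng.integers(0, 2, size=n)   # ground-truth labels
group = rng.integers(0, 2, size=n)    # hypothetical binary sensitive attribute
# Synthetic scores correlated with the label; the 0.5 threshold is arbitrary.
scores = 0.4 * y_true + 0.6 * rng.random(n)
y_pred = (scores > 0.5).astype(int)

print(f"accuracy: {accuracy(y_true, y_pred):.3f}")
print(f"demographic parity difference: "
      f"{demographic_parity_difference(y_pred, group):.3f}")
```

Mitigation strategies of the kind surveyed here typically constrain such a metric toward zero, and tightening that constraint tends to lower accuracy, which is precisely the trade-off the review identifies.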
License
Copyright (c) 2026 iSys - Journal of Information Systems

This work is licensed under a Creative Commons Attribution 4.0 International License.

