Document Type
Conference Proceeding
Publication Title
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Abstract
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct. However, current progress is hampered by a plurality of definitions of bias, means of quantification, and an often vague relation between debiasing algorithms and theoretical measures of bias. This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning, with two key contributions: (1) making clear the inter-relations among the current gamut of methods, and their relation to fairness theory; and (2) addressing the practical problem of model selection, which involves a trade-off between fairness and accuracy and has led to systemic issues in fairness research. Putting these together, we make several recommendations to help shape future work.
First Page
297
Last Page
312
DOI
10.18653/v1/2023.eacl-main.23
Publication Date
5-2023
Recommended Citation
X. Han, T. Baldwin, and T. Cohn, "Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP". In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, ACL, pp. 297–312, May 2023. doi:10.18653/v1/2023.eacl-main.23
Additional Links
ACL Anthology Link: https://aclanthology.org/2023.eacl-main.23
Comments
Open Access
Archived thanks to ACL Anthology
Uploaded 30 November 2023