Optimising Equal Opportunity Fairness in Model Training
Document Type
Article
Publication Title
arXiv
Abstract
Real-world datasets often encode stereotypes and societal biases. Such biases can be implicitly captured by trained models, leading to biased predictions and exacerbating existing societal preconceptions. Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias. However, a disconnect between fairness criteria and training objectives makes it difficult to reason theoretically about the effectiveness of different techniques. In this work, we propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance across two classification tasks. Copyright © 2022, The Authors. All rights reserved.
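For context, equal opportunity requires that a classifier's true positive rate be equal across protected groups, i.e. P(ŷ=1 | y=1, g=0) = P(ŷ=1 | y=1, g=1). As a rough illustration of what optimising such a criterion directly can look like (a minimal sketch, not the paper's actual objectives; `equal_opportunity_gap`, `lam`, and the binary two-group setup are illustrative assumptions), a differentiable relaxation can penalise the gap in mean predicted positive probability on the y=1 examples:

```python
import torch
import torch.nn.functional as F

def equal_opportunity_gap(logits, labels, groups):
    """Differentiable proxy for the equal opportunity gap: the absolute
    difference in mean predicted positive probability, restricted to
    y = 1 examples, between protected groups 0 and 1.
    Assumes both groups appear among the batch's positive examples."""
    probs = torch.sigmoid(logits)      # soft stand-in for P(y_hat = 1 | x)
    pos = labels == 1                  # equal opportunity conditions on y = 1
    tpr_g0 = probs[pos & (groups == 0)].mean()
    tpr_g1 = probs[pos & (groups == 1)].mean()
    return (tpr_g0 - tpr_g1).abs()

def loss_fn(logits, labels, groups, lam=1.0):
    """Hypothetical combined objective: task loss plus a weighted fairness penalty."""
    task_loss = F.binary_cross_entropy_with_logits(logits, labels.float())
    return task_loss + lam * equal_opportunity_gap(logits, labels, groups)
```

In a sketch like this, the penalty weight `lam` trades task performance against the fairness gap; the paper proposes objectives derived directly from the equal opportunity criterion rather than this generic penalty composition.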
DOI
10.48550/arXiv.2205.02393
Publication Date
4 May 2022
Keywords
Classification tasks, De-biasing, Equal opportunity, Fairness criterion, Model training, Performance, Real-world datasets, Machine learning, Computation and Language (cs.CL), Machine Learning (cs.LG)
Recommended Citation
A. Shen, X. Han, T. Cohn, T. Baldwin, and L. Frermann, "Optimising Equal Opportunity Fairness in Model Training", 2022, arXiv:2205.02393
Comments
IR deposit conditions: not described
Preprint available on arXiv