Title
Targeted approaches against discrimination: new methods for bias detection and mitigation in automated decision-making systems
Author
Abstract
Automated decision-making (ADM) systems used in high-stakes areas such as lending or hiring often perpetuate biases present in their underlying data. Consequently, these systems can adversely impact certain population groups, mirroring sexist or racist practices in our society. In this thesis, we inspect current approaches to auditing and mitigating such discriminatory biases in ADM systems. We highlight how these approaches typically centre around single definitions of fairness that aim to express how (un)fair a system is through a single number, and optimize for fairness accordingly. We explain how these approaches fall short in adequately understanding and resolving discrimination, and argue that better approaches should be driven by more nuanced considerations: rather than relying on one single fairness measure, auditors should focus on the parts of the data on which a system behaves in a discriminatory manner, so that they can then address this behaviour in a targeted way. To that end, our first two chapters focus on new tools and methods for bias detection in ADM systems. The first inspects the potential of interactive auditing toolkits, while the second improves an existing method for measuring individual fairness, allowing auditors to decide, for one decision subject at a time, whether that subject received just treatment. Our third chapter introduces a human-in-the-loop approach to mitigating bias in ADM systems. We design a selective classifier that refrains from making predictions when they are deemed discriminatory. These rejected instances, along with an explanation for their rejection, can be passed on to human experts, who can make better-informed decisions for them. The fourth chapter shifts focus from new bias mitigation techniques to evaluating their effectiveness. We emphasize how the traditional evaluation scheme, based on single fairness definitions, is insufficient, and instead introduce a benchmarking dataset to facilitate the evaluation of bias mitigation strategies. This dataset includes both a fair and a biased version of its decision labels, allowing precise assessment of how well a model can predict the fair labels after being trained on the biased ones. Our fifth and final chapter zooms out from these specific considerations surrounding bias in ADM systems and provides an overview of the research field in general and how it has developed over the last 15 years. We conclude this thesis by highlighting research gaps and discussing their implications for future work.
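Purely as an illustration of the selective-classification idea summarized in the abstract, the sketch below shows one way an abstaining classifier could look. It is not the method developed in the thesis: the `SelectiveClassifier` wrapper, the use of a counterfactual flip of the sensitive attribute as the rejection test, and all variable names are assumptions made for this example.

```python
# Illustrative sketch only: a minimal selective classifier that abstains when a
# prediction looks potentially discriminatory. The rejection heuristic (a
# counterfactual flip of the sensitive attribute) is a hypothetical stand-in,
# not the test proposed in the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression

class SelectiveClassifier:
    """Wraps a base classifier and abstains on instances whose prediction
    changes when the sensitive attribute is flipped."""

    def __init__(self, base, sensitive_idx):
        self.base = base
        self.sensitive_idx = sensitive_idx  # column holding the binary sensitive attribute

    def fit(self, X, y):
        self.base.fit(X, y)
        return self

    def predict(self, X):
        preds = self.base.predict(X)
        X_flipped = X.copy()
        X_flipped[:, self.sensitive_idx] = 1 - X_flipped[:, self.sensitive_idx]
        flipped = self.base.predict(X_flipped)
        out = preds.astype(object)
        # Abstain (leaving the case to a human expert) when flipping the
        # sensitive attribute would change the outcome.
        out[preds != flipped] = "reject: sensitive attribute changed the outcome"
        return out

# Tiny synthetic example: column 0 plays the role of the sensitive attribute.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 3)).astype(float)
y = ((X[:, 0] + X[:, 1]) > 1).astype(int)  # label deliberately leaks the sensitive attribute
clf = SelectiveClassifier(LogisticRegression(), sensitive_idx=0).fit(X, y)
print(clf.predict(X[:5]))
```

A real system along the lines the abstract describes would use a more principled rejection test (for instance, the individual-fairness measure the thesis builds on) and would forward each rejected instance, together with an explanation for its rejection, to a human reviewer.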
Language
English
Publication
Antwerpen: University of Antwerp, Faculty of Science, 2024
DOI
10.63028/10067/2078140151162165141
Volume/pages
180 p.
Note
Calders, T. [Supervisor]
De Raedt, S. [Supervisor]
Full text (open access)