Title
A framework and benchmarking study for counterfactual generating methods on tabular data
Abstract
Counterfactual explanations are widely viewed as an effective way to explain machine learning predictions. This interest is reflected in a young but rapidly growing literature that already contains dozens of algorithms for generating such explanations. These algorithms focus on finding how features can be modified to change the output classification. However, this rather general objective can be achieved in different ways, which creates the need for a methodology to test and benchmark these algorithms. The contributions of this work are fourfold. First, we perform a large benchmarking study of 10 algorithmic approaches on 22 tabular datasets, using 9 relevant evaluation metrics. Second, we introduce a novel, first-of-its-kind framework for testing counterfactual generation algorithms. Third, we propose a set of objective metrics to evaluate and compare counterfactual results. Finally, we derive insights from the benchmarking results that indicate which approaches perform best on which types of datasets. This benchmarking study and framework can help practitioners determine which technique and building blocks best suit their context, and can help researchers design and evaluate current and future counterfactual generation algorithms. Our findings show that, overall, there is no single best algorithm for generating counterfactual explanations: performance depends strongly on properties of the dataset, the model, the score, and the specific factual instance.
Language
English
Publication
arXiv, 2021
Volume/pages
33 p.
Full text (open access)
UAntwerpen
Project info
Digitalisation and Tax (DigiTax).
Affiliation
Publications with a UAntwerp address
Record
Creation 15.07.2021
Last edited 07.10.2022