Title
Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem
Author
Abstract
Artificial Intelligence (AI) systems are increasingly used to make decisions in high-stakes domains of our lives, increasing the need to explain these decisions and to ensure that they are aligned with how we want them to be made. The field of Explainable AI (XAI) has emerged in response. However, it faces a significant challenge known as the disagreement problem: multiple explanations are possible for the same AI decision or prediction. While the existence of the disagreement problem is acknowledged, its potential implications have not yet been widely studied. First, we provide an overview of the different strategies explanation providers could deploy to adapt the returned explanation to their benefit. We distinguish between strategies that attack the machine learning model or the underlying data to influence the explanations, and strategies that leverage the explanation phase directly. Next, we analyse several objectives and concrete scenarios that could motivate providers to engage in this behavior, as well as the potentially dangerous consequences this manipulative behavior could have on society. We emphasize that it is crucial to investigate this issue now, before these methods are widely implemented, and we propose some mitigation strategies.
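(Illustrative note, not part of the paper: the disagreement problem the abstract refers to can be shown with a minimal sketch. It assumes scikit-learn and uses two generic post-hoc explanation methods, absolute model coefficients and permutation importance, chosen here purely for illustration; the rankings they produce for the same model and prediction task can differ.)

# Minimal sketch of the disagreement problem: two explanation methods,
# same model, potentially different feature rankings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic data with informative and correlated (redundant) features.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Explanation 1: rank features by absolute model coefficients.
coef_ranking = np.argsort(-np.abs(model.coef_[0]))

# Explanation 2: rank features by permutation importance on the same model.
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
perm_ranking = np.argsort(-perm.importances_mean)

print("coefficient ranking: ", coef_ranking)
print("permutation ranking: ", perm_ranking)
# When features are correlated, the two rankings may differ: two valid
# explanation methods disagree about which features drove the predictions,
# leaving the explanation provider free to report the more convenient one.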
Language
English
Source (journal)
ArXiv.org
Publication
2023
ISSN
2331-8422
DOI
10.48550/ARXIV.2306.13885