Title
Model checking for adversarial multi-agent reinforcement learning with reactive defense methods
Author
Abstract
Cooperative multi-agent reinforcement learning (CMARL) enables agents to achieve a common objective. However, the safety (a.k.a. robustness) of CMARL agents operating in critical environments is not guaranteed. In particular, agents are susceptible to adversarial noise in their observations, which can mislead their decision-making. So-called denoisers aim to remove adversarial noise from observations, yet they are often error-prone. A key challenge for any rigorous safety verification technique in CMARL settings is the large number of states and transitions, which generally prohibits the construction of a (monolithic) model of the whole system. In this paper, we present a verification method for CMARL agents in settings with or without adversarial attacks or denoisers. Our method relies on a tight integration of CMARL and a verification technique referred to as model checking. We showcase the applicability of our method on various benchmarks from different domains. Our experiments show that our method is indeed suited to verify CMARL agents and that it scales better than a naive approach to model checking.
Language
English
Source (journal)
Proceedings of the International Conference on Automated Planning and Scheduling
Source (book)
International Conference on Automated Planning and Scheduling, July 8–13, 2023, Prague, Czech Republic
Publication
2023
ISBN
978-1-57735-881-7
DOI
10.1609/ICAPS.V33I1.27191
Volume/pages
33:1 (2023), p. 162-170