Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Joshua Riley
  • Radu Calinescu
  • Colin Paterson
  • Daniel Kudenko
  • Alec Banks

Organisational units

External organisations

  • University of York
  • Defence Science and Technology Laboratory

Details

Original language: English
Title of host publication: Agents and Artificial Intelligence
Subtitle: 13th International Conference, ICAART 2021, Virtual Event, February 4–6, 2021, Revised Selected Papers
Editors: Ana Paula Rocha, Luc Steels, Jaap van den Herik
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 158-180
Number of pages: 23
ISBN (electronic): 978-3-031-10161-8
ISBN (print): 9783031101601
Publication status: Published - 19 July 2022
Event: 13th International Conference on Agents and Artificial Intelligence, ICAART 2021 - Virtual, Online, Austria
Duration: 4 Feb 2021 - 6 Feb 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13251 LNAI
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Using multi-agent reinforcement learning to find solutions to complex decision-making problems in shared environments has become standard practice in many scenarios. However, this is not the case in safety-critical scenarios, where the reinforcement learning process, which uses stochastic mechanisms, could lead to highly unsafe outcomes. We proposed a novel, safe multi-agent reinforcement learning approach named Assured Multi-Agent Reinforcement Learning (AMARL) to address this issue. Distinct from other safe multi-agent reinforcement learning approaches, AMARL utilises quantitative verification, a model checking technique that guarantees agent compliance with safety, performance, and non-functional requirements, both during and after the learning process. We have previously evaluated AMARL in patrolling domains with various multi-agent reinforcement learning algorithms for both homogeneous and heterogeneous systems. In this work we extend AMARL through the use of deep multi-agent reinforcement learning. This approach is particularly appropriate for systems in which the rewards are sparse and hence extends the applicability of AMARL. We evaluate our approach within a new search and collection domain, which demonstrates promising results in safety standards and performance compared to algorithms not using AMARL.
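The general idea the abstract describes can be illustrated with a minimal, hypothetical sketch; none of the names or structures below are taken from the paper. Assume an offline quantitative-verification step (e.g. with a probabilistic model checker) has already determined which actions satisfy the safety requirements in each state; the learner then explores and exploits only within that verified set, so unsafe actions are excluded both during and after learning.

```python
import random

# Hypothetical, illustrative output of an offline verification step:
# the set of actions proven safe in each (abstract) state.
SAFE_ACTIONS = {
    "s0": {"left", "right"},
    "s1": {"right"},  # suppose "left" violates a safety property in s1
}

def constrained_action(state, q_values, epsilon, rng):
    """Epsilon-greedy action choice restricted to the verified-safe set."""
    safe = sorted(SAFE_ACTIONS[state])
    if rng.random() < epsilon:  # explore, but only among safe actions
        return rng.choice(safe)
    # exploit: best-valued safe action (unseen pairs default to 0.0)
    return max(safe, key=lambda a: q_values.get((state, a), 0.0))

rng = random.Random(0)
q = {("s1", "left"): 5.0, ("s1", "right"): 1.0}
# Even though "left" has the higher learned value, it can never be chosen:
assert all(constrained_action("s1", q, 0.5, rng) == "right" for _ in range(100))
```

This is only a sketch of constrained action selection under stated assumptions; the paper's actual mechanism operates on formally verified abstract models of the multi-agent system rather than a hand-written lookup table.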

ASJC Scopus subject areas

Cite this

Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems. / Riley, Joshua; Calinescu, Radu; Paterson, Colin et al.
Agents and Artificial Intelligence: 13th International Conference, ICAART 2021, Virtual Event, February 4–6, 2021, Revised Selected Papers. Ed. / Ana Paula Rocha; Luc Steels; Jaap van den Herik. Springer Science and Business Media Deutschland GmbH, 2022. pp. 158-180 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13251 LNAI).


Riley, J, Calinescu, R, Paterson, C, Kudenko, D & Banks, A 2022, Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems. in AP Rocha, L Steels & J van den Herik (eds), Agents and Artificial Intelligence: 13th International Conference, ICAART 2021, Virtual Event, February 4–6, 2021, Revised Selected Papers. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13251 LNAI, Springer Science and Business Media Deutschland GmbH, pp. 158-180, 13th International Conference on Agents and Artificial Intelligence, ICAART 2021, Online, Austria, 4 Feb 2021. https://doi.org/10.1007/978-3-031-10161-8_8
Riley, J., Calinescu, R., Paterson, C., Kudenko, D., & Banks, A. (2022). Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems. In A. P. Rocha, L. Steels, & J. van den Herik (Eds.), Agents and Artificial Intelligence: 13th International Conference, ICAART 2021, Virtual Event, February 4–6, 2021, Revised Selected Papers (pp. 158-180). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13251 LNAI). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-10161-8_8
Riley J, Calinescu R, Paterson C, Kudenko D, Banks A. Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems. In Rocha AP, Steels L, van den Herik J, editors, Agents and Artificial Intelligence: 13th International Conference, ICAART 2021, Virtual Event, February 4–6, 2021, Revised Selected Papers. Springer Science and Business Media Deutschland GmbH. 2022. p. 158-180. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-031-10161-8_8
Riley, Joshua ; Calinescu, Radu ; Paterson, Colin et al. / Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems. Agents and Artificial Intelligence: 13th International Conference, ICAART 2021, Virtual Event, February 4–6, 2021, Revised Selected Papers. Ed. / Ana Paula Rocha ; Luc Steels ; Jaap van den Herik. Springer Science and Business Media Deutschland GmbH, 2022. pp. 158-180 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{1721ed718bfb4e01a186502dfff9ec2d,
title = "Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems",
abstract = "Using multi-agent reinforcement learning to find solutions to complex decision-making problems in shared environments has become standard practice in many scenarios. However, this is not the case in safety-critical scenarios, where the reinforcement learning process, which uses stochastic mechanisms, could lead to highly unsafe outcomes. We proposed a novel, safe multi-agent reinforcement learning approach named Assured Multi-Agent Reinforcement Learning (AMARL) to address this issue. Distinct from other safe multi-agent reinforcement learning approaches, AMARL utilises quantitative verification, a model checking technique that guarantees agent compliance with safety, performance, and non-functional requirements, both during and after the learning process. We have previously evaluated AMARL in patrolling domains with various multi-agent reinforcement learning algorithms for both homogeneous and heterogeneous systems. In this work we extend AMARL through the use of deep multi-agent reinforcement learning. This approach is particularly appropriate for systems in which the rewards are sparse and hence extends the applicability of AMARL. We evaluate our approach within a new search and collection domain, which demonstrates promising results in safety standards and performance compared to algorithms not using AMARL.",
keywords = "Assurance, Assured Multi-Agent Reinforcement Learning, Deep Reinforcement Learning, Multi-Agent Reinforcement Learning, Multi-Agent Systems, Quantitative verification, Reinforcement Learning, Safe Multi-Agent Reinforcement Learning, Safety-critical scenarios",
author = "Joshua Riley and Radu Calinescu and Colin Paterson and Daniel Kudenko and Alec Banks",
note = "Funding Information: Supported by the Defence Science and Technology Laboratory.; 13th International Conference on Agents and Artificial Intelligence, ICAART 2021 ; Conference date: 04-02-2021 Through 06-02-2021",
year = "2022",
month = jul,
day = "19",
doi = "10.1007/978-3-031-10161-8_8",
language = "English",
isbn = "9783031101601",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "158--180",
editor = "Rocha, {Ana Paula} and Luc Steels and {van den Herik}, Jaap",
booktitle = "Agents and Artificial Intelligence",
address = "Germany",

}

RIS

TY - GEN

T1 - Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems

AU - Riley, Joshua

AU - Calinescu, Radu

AU - Paterson, Colin

AU - Kudenko, Daniel

AU - Banks, Alec

N1 - Funding Information: Supported by the Defence Science and Technology Laboratory.

PY - 2022/7/19

Y1 - 2022/7/19

N2 - Using multi-agent reinforcement learning to find solutions to complex decision-making problems in shared environments has become standard practice in many scenarios. However, this is not the case in safety-critical scenarios, where the reinforcement learning process, which uses stochastic mechanisms, could lead to highly unsafe outcomes. We proposed a novel, safe multi-agent reinforcement learning approach named Assured Multi-Agent Reinforcement Learning (AMARL) to address this issue. Distinct from other safe multi-agent reinforcement learning approaches, AMARL utilises quantitative verification, a model checking technique that guarantees agent compliance with safety, performance, and non-functional requirements, both during and after the learning process. We have previously evaluated AMARL in patrolling domains with various multi-agent reinforcement learning algorithms for both homogeneous and heterogeneous systems. In this work we extend AMARL through the use of deep multi-agent reinforcement learning. This approach is particularly appropriate for systems in which the rewards are sparse and hence extends the applicability of AMARL. We evaluate our approach within a new search and collection domain, which demonstrates promising results in safety standards and performance compared to algorithms not using AMARL.

AB - Using multi-agent reinforcement learning to find solutions to complex decision-making problems in shared environments has become standard practice in many scenarios. However, this is not the case in safety-critical scenarios, where the reinforcement learning process, which uses stochastic mechanisms, could lead to highly unsafe outcomes. We proposed a novel, safe multi-agent reinforcement learning approach named Assured Multi-Agent Reinforcement Learning (AMARL) to address this issue. Distinct from other safe multi-agent reinforcement learning approaches, AMARL utilises quantitative verification, a model checking technique that guarantees agent compliance with safety, performance, and non-functional requirements, both during and after the learning process. We have previously evaluated AMARL in patrolling domains with various multi-agent reinforcement learning algorithms for both homogeneous and heterogeneous systems. In this work we extend AMARL through the use of deep multi-agent reinforcement learning. This approach is particularly appropriate for systems in which the rewards are sparse and hence extends the applicability of AMARL. We evaluate our approach within a new search and collection domain, which demonstrates promising results in safety standards and performance compared to algorithms not using AMARL.

KW - Assurance

KW - Assured Multi-Agent Reinforcement Learning

KW - Deep Reinforcement Learning

KW - Multi-Agent Reinforcement Learning

KW - Multi-Agent Systems

KW - Quantitative verification

KW - Reinforcement Learning

KW - Safe Multi-Agent Reinforcement Learning

KW - Safety-critical scenarios

UR - http://www.scopus.com/inward/record.url?scp=85135062258&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-10161-8_8

DO - 10.1007/978-3-031-10161-8_8

M3 - Conference contribution

AN - SCOPUS:85135062258

SN - 9783031101601

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 158

EP - 180

BT - Agents and Artificial Intelligence

A2 - Rocha, Ana Paula

A2 - Steels, Luc

A2 - van den Herik, Jaap

PB - Springer Science and Business Media Deutschland GmbH

T2 - 13th International Conference on Agents and Artificial Intelligence, ICAART 2021

Y2 - 4 February 2021 through 6 February 2021

ER -