An evolution strategy with progressive episode lengths for playing games

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Lior Fuks
  • Noor Awad
  • Frank Hutter
  • Marius Lindauer

External Research Organisations

  • University of Freiburg

Details

Original language: English
Title of host publication: Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
Editors: Sarit Kraus
Pages: 1234-1240
Number of pages: 7
ISBN (electronic): 9780999241141
Publication status: Published - 2019
Externally published: Yes
Event: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019 - Macao, China
Duration: 10 Aug 2019 - 16 Aug 2019

Publication series

Name: IJCAI International Joint Conference on Artificial Intelligence
ISSN (Print): 1045-0823

Abstract

Recently, Evolution Strategies (ES) have been successfully applied to problems commonly addressed by reinforcement learning (RL). Due to the simplicity of ES approaches, their runtime is often dominated by the RL task at hand (e.g., playing a game). In this work, we introduce Progressive Episode Lengths (PEL) as a new technique and incorporate it into ES. The main objective is to let the agent first play short, easy tasks with limited episode lengths and then use the gained knowledge to solve longer, harder tasks with progressively increasing lengths. This allows the agent to perform many function evaluations and find a good solution for short time horizons before adapting the strategy to tackle larger time horizons. We evaluated PEL on a subset of Atari games from OpenAI Gym, showing that it can substantially improve the optimization speed, stability, and final score of canonical ES. Specifically, we show average improvements of 80% (32%) after 2 hours (10 hours) compared to canonical ES.
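
The abstract describes canonical ES combined with a schedule over episode lengths. The following Python sketch illustrates that idea under stated assumptions: evaluate() is a hypothetical stand-in for an Atari rollout capped at max_steps, and the population size, noise scale, learning rate, and linear length schedule are illustrative defaults, not the paper's settings. The only change relative to canonical ES is the per-iteration cap on episode length, which makes early iterations much cheaper.

import numpy as np

def evaluate(params, max_steps):
    # Hypothetical stand-in for an Atari rollout capped at max_steps frames.
    # A real agent would run a Gym episode with `params` as policy weights
    # and return the episode score; a toy fitness keeps the sketch runnable.
    target = np.linspace(-1.0, 1.0, params.size)
    horizon_weight = max_steps / 1000.0  # longer episodes expose more of the task
    return -horizon_weight * np.sum((params - target) ** 2)

def es_with_progressive_lengths(dim=32, iterations=200, pop_size=20,
                                sigma=0.1, lr=0.05,
                                min_steps=100, max_steps=1000, seed=0):
    # Canonical (OpenAI-style) ES loop with a growing episode-length cap.
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    for t in range(iterations):
        # Progressive Episode Lengths: short rollouts first, full horizon last.
        steps = int(min_steps + (max_steps - min_steps) * t / max(iterations - 1, 1))
        noise = rng.standard_normal((pop_size, dim))
        rewards = np.array([evaluate(theta + sigma * eps, steps) for eps in noise])
        # Centered-rank fitness shaping, then a gradient-like parameter update.
        ranks = rewards.argsort().argsort()
        weights = ranks / (pop_size - 1) - 0.5
        theta += lr / (pop_size * sigma) * (noise.T @ weights)
    return theta

if __name__ == "__main__":
    theta = es_with_progressive_lengths()
    print("fitness at the full horizon:", evaluate(theta, max_steps=1000))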


Cite this

An evolution strategy with progressive episode lengths for playing games. / Fuks, Lior; Awad, Noor; Hutter, Frank et al.
Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. ed. / Sarit Kraus. 2019. p. 1234-1240 (IJCAI International Joint Conference on Artificial Intelligence).


Fuks, L, Awad, N, Hutter, F & Lindauer, M 2019, An evolution strategy with progressive episode lengths for playing games. in S Kraus (ed.), Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. IJCAI International Joint Conference on Artificial Intelligence, pp. 1234-1240, 28th International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, 10 Aug 2019. https://doi.org/10.24963/ijcai.2019/172
Fuks, L., Awad, N., Hutter, F., & Lindauer, M. (2019). An evolution strategy with progressive episode lengths for playing games. In S. Kraus (Ed.), Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019 (pp. 1234-1240). (IJCAI International Joint Conference on Artificial Intelligence). https://doi.org/10.24963/ijcai.2019/172
Fuks L, Awad N, Hutter F, Lindauer M. An evolution strategy with progressive episode lengths for playing games. In Kraus S, editor, Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. 2019. p. 1234-1240. (IJCAI International Joint Conference on Artificial Intelligence). doi: 10.24963/ijcai.2019/172
Fuks, Lior ; Awad, Noor ; Hutter, Frank et al. / An evolution strategy with progressive episode lengths for playing games. Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019. editor / Sarit Kraus. 2019. pp. 1234-1240 (IJCAI International Joint Conference on Artificial Intelligence).
BibTeX
@inproceedings{b2b9850b51e54130acd164e98e6f1ca8,
title = "An evolution strategy with progressive episode lengths for playing games",
abstract = "Recently, Evolution Strategies (ES) have been successfully applied to solve problems commonly addressed by reinforcement learning (RL). Due to the simplicity of ES approaches, their runtime is often dominated by the RL-task at hand (e.g., playing a game). In this work, we introduce Progressive Episode Lengths (PEL) as a new technique and incorporate it with ES. The main objective is to allow the agent to play short and easy tasks with limited lengths, and then use the gained knowledge to further solve long and hard tasks with progressive lengths. Hence allowing the agent to perform many function evaluations and find a good solution for short time horizons before adapting the strategy to tackle larger time horizons. We evaluated PEL on a subset of Atari games from OpenAI Gym, showing that it can substantially improve the optimization speed, stability and final score of canonical ES. Specifically, we show average improvements of 80% (32%) after 2 hours (10 hours) compared to canonical ES.",
author = "Lior Fuks and Noor Awad and Frank Hutter and Marius Lindauer",
note = "Funding information: Robert Bosch GmbH is acknowledged for financial support. The authors acknowledge support by the state of Baden-W{\"u}rrtemberg through bwHPC and the German Research Foundation (DFG) through grant no. INST 39/963-1 FUGG.; 28th International Joint Conference on Artificial Intelligence, IJCAI 2019 ; Conference date: 10-08-2019 Through 16-08-2019",
year = "2019",
doi = "10.24963/ijcai.2019/172",
language = "English",
series = "IJCAI International Joint Conference on Artificial Intelligence",
pages = "1234--1240",
editor = "Sarit Kraus",
booktitle = "Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019",

}

RIS

TY - GEN

T1 - An evolution strategy with progressive episode lengths for playing games

AU - Fuks, Lior

AU - Awad, Noor

AU - Hutter, Frank

AU - Lindauer, Marius

N1 - Funding information: Robert Bosch GmbH is acknowledged for financial support. The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no. INST 39/963-1 FUGG.

PY - 2019

Y1 - 2019

N2 - Recently, Evolution Strategies (ES) have been successfully applied to solve problems commonly addressed by reinforcement learning (RL). Due to the simplicity of ES approaches, their runtime is often dominated by the RL-task at hand (e.g., playing a game). In this work, we introduce Progressive Episode Lengths (PEL) as a new technique and incorporate it with ES. The main objective is to allow the agent to play short and easy tasks with limited lengths, and then use the gained knowledge to further solve long and hard tasks with progressive lengths. Hence allowing the agent to perform many function evaluations and find a good solution for short time horizons before adapting the strategy to tackle larger time horizons. We evaluated PEL on a subset of Atari games from OpenAI Gym, showing that it can substantially improve the optimization speed, stability and final score of canonical ES. Specifically, we show average improvements of 80% (32%) after 2 hours (10 hours) compared to canonical ES.

AB - Recently, Evolution Strategies (ES) have been successfully applied to solve problems commonly addressed by reinforcement learning (RL). Due to the simplicity of ES approaches, their runtime is often dominated by the RL-task at hand (e.g., playing a game). In this work, we introduce Progressive Episode Lengths (PEL) as a new technique and incorporate it with ES. The main objective is to allow the agent to play short and easy tasks with limited lengths, and then use the gained knowledge to further solve long and hard tasks with progressive lengths. Hence allowing the agent to perform many function evaluations and find a good solution for short time horizons before adapting the strategy to tackle larger time horizons. We evaluated PEL on a subset of Atari games from OpenAI Gym, showing that it can substantially improve the optimization speed, stability and final score of canonical ES. Specifically, we show average improvements of 80% (32%) after 2 hours (10 hours) compared to canonical ES.

UR - http://www.scopus.com/inward/record.url?scp=85074913351&partnerID=8YFLogxK

U2 - 10.24963/ijcai.2019/172

DO - 10.24963/ijcai.2019/172

M3 - Conference contribution

AN - SCOPUS:85074913351

T3 - IJCAI International Joint Conference on Artificial Intelligence

SP - 1234

EP - 1240

BT - Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019

A2 - Kraus, Sarit

T2 - 28th International Joint Conference on Artificial Intelligence, IJCAI 2019

Y2 - 10 August 2019 through 16 August 2019

ER -
