AMENet: Attentive Maps Encoder Network for Trajectory Prediction

Publication: Contribution to journal › Article › Research › Peer review

Authors

Hao Cheng, Wentong Liao, Michael Ying Yang, Bodo Rosenhahn, Monika Sester

Details

Original language: English
Pages (from-to): 253-266
Number of pages: 14
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 172
Early online date: 14 Jan 2021
Publication status: Published - Feb 2021

Abstract

Trajectory prediction is a crucial task in many communities, such as intelligent transportation systems, photogrammetry, computer vision, and mobile robot applications. However, predicting the trajectories of heterogeneous road agents (e.g., pedestrians, cyclists, and vehicles) at a microscopic level poses many challenges: an agent may choose among multiple plausible paths in complex interactions with other agents in varying environments, and the behavior of each agent is affected by the various behaviors of its neighboring agents. To this end, we propose an end-to-end generative model named Attentive Maps Encoder Network (AMENet) for accurate and realistic multi-path trajectory prediction. Our method leverages the target road user's motion information (i.e., movement along the x- and y-axes in Cartesian space) and the interaction information with neighboring road users at each time step, which is encoded as dynamic maps centered on the target road user. A conditional variational auto-encoder module is trained to learn the latent space of possible future paths based on the dynamic maps and is then used to predict multiple plausible future trajectories conditioned on the observed past trajectories. Our method achieves new state-of-the-art performance (final/mean average displacement errors, FDE/MDE, of 1.183/0.356 meters) on benchmark datasets and wins first place in the open Trajnet challenge.
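
The abstract outlines the prediction scheme: the observed motion of the target road user (together with attentive dynamic interaction maps) is encoded, a latent code is sampled from a conditional variational auto-encoder (CVAE), and several plausible future trajectories are decoded and then scored with displacement errors. The sketch below only illustrates that general scheme; it is not the published AMENet implementation. The framework choice (PyTorch), all layer and horizon sizes, the name SimpleCVAEPredictor, and the omission of the dynamic-map encoder are illustrative assumptions.

    import torch
    import torch.nn as nn

    OBS_LEN, PRED_LEN, HIDDEN, LATENT = 8, 12, 64, 32   # assumed horizons and layer sizes

    class SimpleCVAEPredictor(nn.Module):
        """Toy CVAE-style multi-path predictor (the attentive dynamic-map branch is omitted)."""

        def __init__(self):
            super().__init__()
            self.obs_encoder = nn.LSTM(2, HIDDEN, batch_first=True)  # encodes past (x, y) offsets
            self.to_latent = nn.Linear(HIDDEN, 2 * LATENT)           # produces mean and log-variance
            self.decoder = nn.LSTM(LATENT + HIDDEN, HIDDEN, batch_first=True)
            self.out = nn.Linear(HIDDEN, 2)                          # future (x, y) offset per step

        def forward(self, obs, n_samples=20):
            # obs: (batch, OBS_LEN, 2) observed offsets of the target road user
            _, (h, _) = self.obs_encoder(obs)
            h = h[-1]                                                # (batch, HIDDEN)
            mu, logvar = self.to_latent(h).chunk(2, dim=-1)
            futures = []
            for _ in range(n_samples):
                z = mu + torch.randn_like(mu) * (0.5 * logvar).exp() # reparameterisation trick
                step_in = torch.cat([z, h], dim=-1).unsqueeze(1).repeat(1, PRED_LEN, 1)
                dec_out, _ = self.decoder(step_in)
                futures.append(self.out(dec_out))                    # (batch, PRED_LEN, 2)
            return torch.stack(futures, dim=1)                       # (batch, n_samples, PRED_LEN, 2)

    def ade_fde(pred, gt):
        """Average and final displacement errors between one prediction and the ground truth."""
        dist = torch.linalg.norm(pred - gt, dim=-1)                  # per-step Euclidean distance
        return dist.mean(dim=-1), dist[..., -1]

    if __name__ == "__main__":
        model = SimpleCVAEPredictor()
        obs = torch.randn(4, OBS_LEN, 2)                             # 4 observed tracks (dummy data)
        preds = model(obs)                                           # 20 sampled futures per track
        print(preds.shape)                                           # torch.Size([4, 20, 12, 2])

In a faithful implementation, the dynamic maps centered on the target road user would be encoded by a separate attentive branch and combined with the motion features before the latent code is sampled, which is the part this sketch leaves out.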


Cite

AMENet: Attentive Maps Encoder Network for Trajectory Prediction. / Cheng, Hao; Liao, Wentong; Yang, Michael Ying et al.
In: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 172, 02.2021, p. 253-266.


Cheng H, Liao W, Yang MY, Rosenhahn B, Sester M. AMENet: Attentive Maps Encoder Network for Trajectory Prediction. ISPRS Journal of Photogrammetry and Remote Sensing. 2021 Feb;172:253-266. Epub 2021 Jan 14. doi: 10.1016/j.isprsjprs.2020.12.004
Cheng, Hao; Liao, Wentong; Yang, Michael Ying et al. / AMENet: Attentive Maps Encoder Network for Trajectory Prediction. In: ISPRS Journal of Photogrammetry and Remote Sensing. 2021; Vol. 172. pp. 253-266.
BibTeX
@article{705ad4bab7af4269887b7be593a3d495,
title = "AMENet: Attentive Maps Encoder Network for Trajectory Prediction",
keywords = "cs.CV, Trajectory prediction, Encoder, Generative model",
author = "Hao Cheng and Wentong Liao and Yang, {Michael Ying} and Bodo Rosenhahn and Monika Sester",
note = "Funding Information: This work is supported by the German Research Foundation (DFG) through the Research Training Group SocialCars (GRK 1931). ",
year = "2021",
month = feb,
doi = "10.1016/j.isprsjprs.2020.12.004",
language = "English",
volume = "172",
pages = "253--266",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
issn = "0924-2716",
publisher = "Elsevier",

}

