V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Hao Xiang
  • Zhaoliang Zheng
  • Xin Xia
  • Runsheng Xu
  • Letian Gao
  • Zewei Zhou
  • Xu Han
  • Xinkai Ji
  • Mingxi Li
  • Zonglin Meng
  • Li Jin
  • Mingyue Lei
  • Zhaoyang Ma
  • Zihang He
  • Haoxuan Ma
  • Yunshuang Yuan
  • Yingqian Zhao
  • Jiaqi Ma

External Research Organisations

  • University of California, Los Angeles (UCLA)

Details

Original language: English
Title of host publication: Computer Vision – ECCV 2024 - 18th European Conference, Proceedings
Editors: Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, Gül Varol
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 455-470
Number of pages: 16
ISBN (print): 9783031729423
Publication status: Published - 2025
Externally published: Yes
Event: 18th European Conference on Computer Vision, ECCV 2024 - Milan, Italy
Duration: 29 Sept 2024 – 4 Oct 2024

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 15110 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Recent advancements in Vehicle-to-Everything (V2X) technologies have enabled autonomous vehicles to share sensing information and see through occlusions, greatly boosting perception capability. However, no real-world datasets exist to facilitate V2X cooperative perception research: existing datasets support only Vehicle-to-Infrastructure or Vehicle-to-Vehicle cooperation. In this paper, we present V2X-Real, a large-scale dataset that includes a mixture of multiple vehicles and smart infrastructure to facilitate V2X cooperative perception development with multi-modality sensing data. V2X-Real is collected using two connected automated vehicles and two smart infrastructure units, all equipped with multi-modal sensors including LiDAR sensors and multi-view cameras. The full dataset contains 33K LiDAR frames and 171K camera images with over 1.2M annotated bounding boxes across 10 categories in very challenging urban scenarios. According to the collaboration mode and ego perspective, we derive four sub-datasets for Vehicle-Centric, Infrastructure-Centric, Vehicle-to-Vehicle, and Infrastructure-to-Infrastructure cooperative perception. Comprehensive multi-class multi-agent benchmarks of state-of-the-art cooperative perception methods are provided. The V2X-Real dataset and codebase are available at https://mobility-lab.seas.ucla.edu/v2x-real.
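The four cooperation-mode splits described in the abstract follow directly from the ego agent's type (vehicle or infrastructure) and the types of its collaborators. A minimal sketch of that classification logic (the agent names and the helper function are hypothetical illustrations, not part of the released codebase):

```python
# Illustrative agent roster: the dataset is collected with two connected
# automated vehicles (CAVs) and two smart infrastructure units.
AGENTS = {
    "cav_1": "vehicle",
    "cav_2": "vehicle",
    "infra_1": "infrastructure",
    "infra_2": "infrastructure",
}

def cooperation_mode(ego, collaborators):
    """Classify a cooperative-perception sample by ego perspective
    and collaborator types, yielding one of the four sub-datasets."""
    ego_type = AGENTS[ego]
    collab_types = {AGENTS[c] for c in collaborators}
    if ego_type == "vehicle":
        # Vehicle ego: pure vehicle collaborators -> V2V,
        # any infrastructure collaborator -> Vehicle-Centric (V2X).
        return "V2V" if collab_types == {"vehicle"} else "Vehicle-Centric"
    # Infrastructure ego: pure infrastructure collaborators -> I2I,
    # otherwise Infrastructure-Centric.
    return "I2I" if collab_types == {"infrastructure"} else "Infrastructure-Centric"
```

Under this sketch, a CAV ego cooperating only with the other CAV falls into the Vehicle-to-Vehicle split, while the same ego cooperating with any infrastructure unit falls into the Vehicle-Centric split, mirroring the derivation the abstract describes.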

Keywords

    Autonomous Driving, Cooperative Perception, V2X Dataset


Cite this

V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception. / Xiang, Hao; Zheng, Zhaoliang; Xia, Xin et al.
Computer Vision – ECCV 2024 - 18th European Conference, Proceedings. ed. / Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol. Springer Science and Business Media Deutschland GmbH, 2025. p. 455-470 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 15110 LNCS).

Research output: Chapter in book/report/conference proceedingConference contributionResearchpeer review

Xiang, H, Zheng, Z, Xia, X, Xu, R, Gao, L, Zhou, Z, Han, X, Ji, X, Li, M, Meng, Z, Jin, L, Lei, M, Ma, Z, He, Z, Ma, H, Yuan, Y, Zhao, Y & Ma, J 2025, V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception. in A Leonardis, E Ricci, S Roth, O Russakovsky, T Sattler & G Varol (eds), Computer Vision – ECCV 2024 - 18th European Conference, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 15110 LNCS, Springer Science and Business Media Deutschland GmbH, pp. 455-470, 18th European Conference on Computer Vision, ECCV 2024, Milan, Italy, 29 Sept 2024. https://doi.org/10.1007/978-3-031-72943-0_26
Xiang, H., Zheng, Z., Xia, X., Xu, R., Gao, L., Zhou, Z., Han, X., Ji, X., Li, M., Meng, Z., Jin, L., Lei, M., Ma, Z., He, Z., Ma, H., Yuan, Y., Zhao, Y., & Ma, J. (2025). V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception. In A. Leonardis, E. Ricci, S. Roth, O. Russakovsky, T. Sattler, & G. Varol (Eds.), Computer Vision – ECCV 2024 - 18th European Conference, Proceedings (pp. 455-470). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 15110 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-72943-0_26
Xiang H, Zheng Z, Xia X, Xu R, Gao L, Zhou Z et al. V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception. In Leonardis A, Ricci E, Roth S, Russakovsky O, Sattler T, Varol G, editors, Computer Vision – ECCV 2024 - 18th European Conference, Proceedings. Springer Science and Business Media Deutschland GmbH. 2025. p. 455-470. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-031-72943-0_26
Xiang, Hao ; Zheng, Zhaoliang ; Xia, Xin et al. / V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception. Computer Vision – ECCV 2024 - 18th European Conference, Proceedings. editor / Aleš Leonardis ; Elisa Ricci ; Stefan Roth ; Olga Russakovsky ; Torsten Sattler ; Gül Varol. Springer Science and Business Media Deutschland GmbH, 2025. pp. 455-470 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{ec3f4f10acf6468baf299acdf02f415d,
title = "V2X-Real: A Large-Scale Dataset for Vehicle-to-Everything Cooperative Perception",
abstract = "Recent advancements in Vehicle-to-Everything (V2X) technologies have enabled autonomous vehicles to share sensing information to see through occlusions, greatly boosting the perception capability. However, there are no real-world datasets to facilitate the real V2X cooperative perception research – existing datasets either only support Vehicle-to-Infrastructure cooperation or Vehicle-to-Vehicle cooperation. In this paper, we present V2X-Real, a large-scale dataset that includes a mixture of multiple vehicles and smart infrastructure to facilitate the V2X cooperative perception development with multi-modality sensing data. Our V2X-Real is collected using two connected automated vehicles and two smart infrastructure, which are all equipped with multi-modal sensors including LiDAR sensors and multi-view cameras. The whole dataset contains 33K LiDAR frames and 171K camera data with over 1.2 M annotated bounding boxes of 10 categories in very challenging urban scenarios. According to the collaboration mode and ego perspective, we derive four types of datasets for Vehicle-Centric, Infrastructure-Centric, Vehicle-to-Vehicle, and Infrastructure-to-Infrastructure cooperative perception. Comprehensive multi-class multi-agent benchmarks of SOTA cooperative perception methods are provided. The V2X-Real dataset and codebase are available at https://mobility-lab.seas.ucla.edu/v2x-real.",
keywords = "Autonomous Driving, Cooperative Perception, V2X Dataset",
author = "Hao Xiang and Zhaoliang Zheng and Xin Xia and Runsheng Xu and Letian Gao and Zewei Zhou and Xu Han and Xinkai Ji and Mingxi Li and Zonglin Meng and Li Jin and Mingyue Lei and Zhaoyang Ma and Zihang He and Haoxuan Ma and Yunshuang Yuan and Yingqian Zhao and Jiaqi Ma",
note = "Publisher Copyright: {\textcopyright} The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.; 18th European Conference on Computer Vision, ECCV 2024 ; Conference date: 29-09-2024 Through 04-10-2024",
year = "2025",
doi = "10.1007/978-3-031-72943-0_26",
language = "English",
isbn = "9783031729423",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "455--470",
editor = "Ale{\v s} Leonardis and Elisa Ricci and Stefan Roth and Olga Russakovsky and Torsten Sattler and G{\"u}l Varol",
booktitle = "Computer Vision – ECCV 2024 - 18th European Conference, Proceedings",
address = "Germany",

}

RIS

TY - GEN

T1 - V2X-Real

T2 - 18th European Conference on Computer Vision, ECCV 2024

AU - Xiang, Hao

AU - Zheng, Zhaoliang

AU - Xia, Xin

AU - Xu, Runsheng

AU - Gao, Letian

AU - Zhou, Zewei

AU - Han, Xu

AU - Ji, Xinkai

AU - Li, Mingxi

AU - Meng, Zonglin

AU - Jin, Li

AU - Lei, Mingyue

AU - Ma, Zhaoyang

AU - He, Zihang

AU - Ma, Haoxuan

AU - Yuan, Yunshuang

AU - Zhao, Yingqian

AU - Ma, Jiaqi

N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.

PY - 2025

Y1 - 2025

N2 - Recent advancements in Vehicle-to-Everything (V2X) technologies have enabled autonomous vehicles to share sensing information to see through occlusions, greatly boosting the perception capability. However, there are no real-world datasets to facilitate the real V2X cooperative perception research – existing datasets either only support Vehicle-to-Infrastructure cooperation or Vehicle-to-Vehicle cooperation. In this paper, we present V2X-Real, a large-scale dataset that includes a mixture of multiple vehicles and smart infrastructure to facilitate the V2X cooperative perception development with multi-modality sensing data. Our V2X-Real is collected using two connected automated vehicles and two smart infrastructure, which are all equipped with multi-modal sensors including LiDAR sensors and multi-view cameras. The whole dataset contains 33K LiDAR frames and 171K camera data with over 1.2 M annotated bounding boxes of 10 categories in very challenging urban scenarios. According to the collaboration mode and ego perspective, we derive four types of datasets for Vehicle-Centric, Infrastructure-Centric, Vehicle-to-Vehicle, and Infrastructure-to-Infrastructure cooperative perception. Comprehensive multi-class multi-agent benchmarks of SOTA cooperative perception methods are provided. The V2X-Real dataset and codebase are available at https://mobility-lab.seas.ucla.edu/v2x-real.

AB - Recent advancements in Vehicle-to-Everything (V2X) technologies have enabled autonomous vehicles to share sensing information to see through occlusions, greatly boosting the perception capability. However, there are no real-world datasets to facilitate the real V2X cooperative perception research – existing datasets either only support Vehicle-to-Infrastructure cooperation or Vehicle-to-Vehicle cooperation. In this paper, we present V2X-Real, a large-scale dataset that includes a mixture of multiple vehicles and smart infrastructure to facilitate the V2X cooperative perception development with multi-modality sensing data. Our V2X-Real is collected using two connected automated vehicles and two smart infrastructure, which are all equipped with multi-modal sensors including LiDAR sensors and multi-view cameras. The whole dataset contains 33K LiDAR frames and 171K camera data with over 1.2 M annotated bounding boxes of 10 categories in very challenging urban scenarios. According to the collaboration mode and ego perspective, we derive four types of datasets for Vehicle-Centric, Infrastructure-Centric, Vehicle-to-Vehicle, and Infrastructure-to-Infrastructure cooperative perception. Comprehensive multi-class multi-agent benchmarks of SOTA cooperative perception methods are provided. The V2X-Real dataset and codebase are available at https://mobility-lab.seas.ucla.edu/v2x-real.

KW - Autonomous Driving

KW - Cooperative Perception

KW - V2X Dataset

UR - http://www.scopus.com/inward/record.url?scp=85211346885&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-72943-0_26

DO - 10.1007/978-3-031-72943-0_26

M3 - Conference contribution

AN - SCOPUS:85211346885

SN - 9783031729423

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 455

EP - 470

BT - Computer Vision – ECCV 2024 - 18th European Conference, Proceedings

A2 - Leonardis, Aleš

A2 - Ricci, Elisa

A2 - Roth, Stefan

A2 - Russakovsky, Olga

A2 - Sattler, Torsten

A2 - Varol, Gül

PB - Springer Science and Business Media Deutschland GmbH

Y2 - 29 September 2024 through 4 October 2024

ER -