DAIR-V2X is the first large-scale, multi-modality, multi-view dataset for Vehicle-Infrastructure Collaborative Autonomous Driving (VICAD), with 2D and 3D object annotations. All data was captured in real-world scenarios.
- 72890 LiDAR frames and 72890 Camera images in total
  - DAIR-V2X Collaborative Dataset (DAIR-V2X-C): 40481 LiDAR frames, 40481 Camera images
  - DAIR-V2X Infrastructure Dataset (DAIR-V2X-I): 10084 LiDAR frames, 10084 Camera images
  - DAIR-V2X Vehicle Dataset (DAIR-V2X-V): 22325 LiDAR frames, 22325 Camera images
- Temporal-Spatial Synchronized Annotation for Vehicle-Infrastructure Collaboration
- Diverse sensors (vehicle-side Camera, vehicle-side LiDAR, infrastructure-side Camera, infrastructure-side LiDAR)
- Diverse environments (day/night, sunny/rainy, urban/suburban areas)
- Fully annotated scenes covering 15 object classes; the release includes desensitized raw images and point clouds, 3D annotation files, timestamp files, and calibration files
- 10 km of city roads, 10 km of highway, 28 intersections, and a 38 km² driving region
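The 3D annotation files describe each object as an oriented box. As a minimal sketch of how such an annotation might be consumed, the function below converts a center/size/yaw box into its eight corner points; the field names and axis conventions here are generic assumptions, not the exact DAIR-V2X label schema, so check the official documentation before relying on them.

```python
import math

def box_corners(location, dimensions, rotation_y):
    """Return the 8 corners of a 3D box as (x, y, z) tuples.

    `location` is the box center (x, y, z), `dimensions` is (l, w, h),
    and `rotation_y` is the yaw angle in radians. These names and the
    rotate-about-the-vertical-axis convention are illustrative
    assumptions; the actual DAIR-V2X label format may differ.
    """
    cx, cy, cz = location
    l, w, h = dimensions
    c, s = math.cos(rotation_y), math.sin(rotation_y)
    corners = []
    for dx in (l / 2, -l / 2):
        for dy in (w / 2, -w / 2):
            for dz in (h / 2, -h / 2):
                # Rotate the local offset by the yaw angle, then translate
                # to the box center.
                rx = c * dx - s * dy
                ry = s * dx + c * dy
                corners.append((cx + rx, cy + ry, cz + dz))
    return corners

# Example: a 4 m x 2 m x 1.5 m box centered at (10, 2, 0) with zero yaw.
corners = box_corners((10.0, 2.0, 0.0), (4.0, 2.0, 1.5), 0.0)
```

The same construction applies on both the vehicle side and the infrastructure side once boxes are expressed in a common coordinate frame via the provided calibration files.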
If you find this dataset useful, please cite it as follows:
@dataset{DAIR-V2X2021,
  title={Vehicle-Infrastructure Collaborative Autonomous Driving: DAIR-V2X Dataset},
  author={Institute for AI Industry Research (AIR), Tsinghua University},
  url={http://air.tsinghua.edu.cn/dair-v2x},
  year={2021}
}
- Institute for AI Industry Research (AIR), Tsinghua University
- Beijing High-Level Autonomous Driving Demonstration Area
- Beijing CheWang Technology Development Cooperation
- Baidu Apollo