Audi announced that it is releasing a large dataset for autonomous driving called A2D2. Following similar dataset releases from other companies, the new dataset is aimed at supporting academic research and startups working in the field of autonomous driving.
The AEV Autonomous Driving Dataset contains more than 40,000 labeled camera frames annotated with 2D semantic segmentation, 3D point clouds, 3D bounding boxes, and vehicle bus data. According to the dataset description, only a subset of the frames carries all four annotation types, including the 3D bounding boxes. Audi also included an additional 390,000 unlabeled frames as part of the dataset.
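The 3D bounding box annotations describe objects by position, extent, and orientation. As a rough illustration of how such an annotation turns into geometry, the sketch below computes the eight corners of a box from a center, a size, and a yaw angle; the parameter names and conventions here are assumptions for illustration, not the actual A2D2 JSON schema.

```python
import numpy as np

def box_corners(center, size, yaw):
    """Return the 8 corners of a 3D box rotated by yaw around the vertical axis.

    center: (x, y, z), size: (length, width, height), yaw in radians.
    These field names and axis conventions are assumptions; A2D2 stores
    boxes in its own JSON format, documented with the dataset.
    """
    l, w, h = size
    # Corner offsets in the box's own frame, before rotation.
    x = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * l / 2
    y = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * w / 2
    z = np.array([-1, -1, -1, -1, 1, 1, 1, 1]) * h / 2
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])       # 2D rotation in the ground plane
    xy = rot @ np.vstack([x, y])            # rotate the horizontal offsets
    return np.vstack([xy, z[None]]).T + np.asarray(center)

# A hypothetical box: 4 m long, 2 m wide, 1.5 m tall, 10 m ahead of the car.
corners = box_corners((10.0, 2.0, 0.5), (4.0, 2.0, 1.5), 0.0)
```

With yaw set to zero, the corners simply span `center ± size / 2` along each axis, which makes the function easy to sanity-check before applying it to real annotations.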
The semantic segmentation features 38 categories covering most of the commonly encountered objects in traffic, such as traffic signs, traffic lights, and pedestrians. The point cloud segmentation, on the other hand, provides 3D segmentation and is built by fusing camera and LIDAR data. For more than 12,000 samples, the dataset also contains 3D bounding boxes for objects in the field of view of the frontal camera.
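Semantic segmentation labels of this kind are typically shipped as color-coded images, where each RGB color encodes one of the 38 categories. A minimal sketch of decoding such a mask into integer class ids is shown below; the color values and class names are made up for illustration, while the real mapping ships with the A2D2 dataset.

```python
import numpy as np

# Hypothetical subset of a color -> class mapping. The actual A2D2
# mapping (38 classes) is distributed alongside the dataset.
COLOR_TO_CLASS = {
    (255, 0, 0): "Car",
    (204, 153, 255): "Pedestrian",
    (0, 255, 0): "Traffic signal",
}

def decode_segmentation(rgb_mask: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 color-coded mask into an HxW array of class ids."""
    class_ids = np.full(rgb_mask.shape[:2], -1, dtype=np.int32)  # -1 = unknown
    for idx, color in enumerate(COLOR_TO_CLASS):
        # Mark every pixel whose RGB triple matches this class color.
        match = np.all(rgb_mask == np.array(color, dtype=rgb_mask.dtype), axis=-1)
        class_ids[match] = idx
    return class_ids

# A tiny synthetic 2x2 "mask" standing in for a real labeled frame.
mask = np.array([[[255, 0, 0], [0, 255, 0]],
                 [[204, 153, 255], [0, 0, 0]]], dtype=np.uint8)
ids = decode_segmentation(mask)
```

Pixels whose color is not in the mapping stay at -1, which makes unmapped or corrupted labels easy to spot when iterating over real frames.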
The A2D2 dataset was built using a sensor set consisting of six cameras, five LIDAR sensors, and an automotive gateway. The data was collected in three cities using this sensor set, which covers the full 360 degrees of the environment around the car.
The new dataset can be downloaded from the following link. The engineers have also provided tutorials on how to load and use the data from the dataset.