Data

As part of this BigData Cup challenge, we have released the Synthetic Vehicle Orientation (Synthetic VO) dataset, which contains bounding box annotations of vehicles labeled with both their class and their orientation. The dataset is publicly available at our GitHub repository: https://github.com/sekilab/VehicleOrientationDataset.

There are three datasets:

The training dataset and the first test dataset (test-1) were released at the start of the competition, while the second test dataset (test-2) will be released later in September. Test-2 will be slightly more challenging than test-1. To calculate the final weighted mAP, we will give 45% weight to the test-1 score and 55% weight to the test-2 score.
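The 45%/55% combination of the two test-set scores can be sketched as follows; `map_test1` and `map_test2` stand for a participant's weighted mAP on each test set (the example values are placeholders, not real results):

```python
def final_score(map_test1: float, map_test2: float) -> float:
    """Combine per-test-set weighted mAP with the 45%/55% split."""
    return 0.45 * map_test1 + 0.55 * map_test2

# Example with hypothetical scores:
print(final_score(0.60, 0.50))  # 0.45*0.60 + 0.55*0.50 = 0.545
```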

The training dataset contains only synthetic images, while the test datasets (test-1 and test-2) contain real-world vehicle images. For this competition, participants may use synthetic images from driving simulators or video games as training data. You may refer to the following papers to learn more about the real-world evaluation dataset.

Vehicle class      Number of annotations (train-1 and train-2)
car_front          42,273
car_back           35,017
car_side           13,131
truck_front         1,995
truck_back          2,667
truck_side          1,220
motorcycle_front      770
motorcycle_back     1,476
motorcycle_side     2,614
cycle_front           498
cycle_back          1,284
cycle_side          1,881

The number of annotations per class in the Synthetic VO dataset is imbalanced and follows a long-tail distribution. Thus, for a fair assessment, we will use weighted mAP as the evaluation metric, where each class receives a weight based on its dominance in the test dataset provided by us.
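A minimal sketch of class-weighted mAP, assuming hypothetical per-class AP scores and class weights (the official per-class weights are defined by the organizers; the names and numbers below are illustrative placeholders):

```python
def weighted_map(ap_per_class: dict, class_weights: dict) -> float:
    """Weighted mean of per-class AP; weights are normalized to sum to 1."""
    total_w = sum(class_weights.values())
    return sum(ap_per_class[c] * w for c, w in class_weights.items()) / total_w

# Hypothetical per-class AP and weights for three of the twelve classes:
ap = {"car_front": 0.80, "truck_side": 0.55, "cycle_back": 0.40}
w = {"car_front": 0.6, "truck_side": 0.3, "cycle_back": 0.1}
print(weighted_map(ap, w))  # (0.48 + 0.165 + 0.04) / 1.0 = 0.685
```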

Submissions will be ranked solely by their weighted mAP on the test datasets; the submission with the highest weighted mAP will be ranked as the winner.