An Abandoned and Stolen Object Discrimination dataset


These pages describe the Abandoned and Stolen Object Discrimination dataset (ASODds), a corpus of video sequences that provides a representative test set for comparing systems that classify previously detected stationary regions as either abandoned or stolen objects in video surveillance. Moreover, manual annotations of the events of interest are also provided.

The dataset is composed of several annotated surveillance sequences of different levels of complexity. The sequences have been extracted from public datasets related to the people detection/object classification task. They are:

1) PETS2006: 9th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, 2006.

2) PETS2007: 10th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, 2007.

3) WCAM: Video Surveillance video sequences (The test visual material used in this work has been provided with courtesy of Thales Security Systems within the scope of the IST FP6 WCAM project).

4) VISOR: Video Surveillance Online Repository.

5) CVSG: A Chroma-based Video Segmentation Ground-truth.

6) AVSS2007: 7th IEEE International Conference on Advanced Video and Signal based Surveillance, September 2007.

7) CANTATA: Left object dataset for outdoor scenarios.

8) CANDELA: Abandoned object detection dataset for indoor scenarios.

For more details, please refer to the "Content" section.

This dataset is only available for research purposes. Institutions or research groups that have downloaded it include:

Related publications (using the dataset):

[1] L. Caro, J.C. SanMiguel and J. M. Martínez, “Discrimination of abandoned and stolen objects based on active contours”, in Proc. of the IEEE 8th Int. Conf. on Advanced Video and Signal based Surveillance, AVSS2011, Klagenfurt (Austria), September 2011, pp. 101-106.

[2] J.C. SanMiguel, L. Caro and J. M. Martínez, "Pixel-based colour contrast for abandoned and stolen object discrimination in video surveillance", Electronics Letters, 48 (2): pp. 86-87, February 2012.

Work partially supported by the Spanish Government under project TEC2011-25995 (EventVideo).