Video Processing and Understanding Lab
Universidad Autónoma de Madrid, Escuela Politécnica Superior

TEC2011-25995 EventVideo (2012-2014)
Strategies for Object Segmentation, Detection and Tracking in Complex Environments for Event Detection in Video Surveillance and Monitoring

Supported by the Ministerio de Economía y Competitividad of the Spanish Government

Main achievements of the project

The TEC2011-25995 EventVideo project, focused on segmentation, detection and tracking in video sequences captured in complex environments, has met its objectives, yielding 16 articles in international journals, 2 book chapters and 11 papers at international conferences, as well as several evaluation datasets useful to the scientific community.

In the segmentation stage, a multimodal foreground segmentation technique robust to illumination changes and camouflage has been developed. A RANSAC-based algorithm has also been designed to estimate camera motion when the background is smaller than the moving objects or contains homogeneous regions. In addition, a background generation algorithm based on iterative analysis of visual features has been proposed.
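
As an illustration of the general idea behind RANSAC-based global motion estimation (a minimal sketch only, not the project's algorithm; function names and parameters are hypothetical), the following Python/OpenCV snippet fits a homography to the dominant motion between two frames while rejecting correspondences on moving objects as outliers:

import cv2
import numpy as np

def estimate_camera_motion(prev_gray, curr_gray, ransac_thresh=3.0):
    """Return a 3x3 homography describing the dominant (background) motion."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC treats correspondences that do not follow the dominant motion
    # (e.g. points on moving objects) as outliers when fitting the model.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return H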

In the people detection stage, several state-of-the-art algorithms have been evaluated. In addition, an algorithm that uses tracking feedback to refine the fusion of motion and appearance cues has been developed, and a technique for person-background segmentation has been proposed. Finally, for dense environments, two post-processing techniques (fusion of independent detectors and use of person-background maps) and a detector of people in groups have been proposed.
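
As a rough illustration of what fusing independent detectors can look like (an assumption for exposition, not the project's post-processing technique; names and thresholds are hypothetical), the sketch below pools the detections of two person detectors and applies a simple non-maximum suppression over the combined set:

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def fuse_detections(dets_a, dets_b, iou_thresh=0.5):
    """dets_*: lists of (box, score). Returns the fused, suppressed list."""
    pooled = sorted(dets_a + dets_b, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in pooled:
        # Keep a detection only if it does not heavily overlap a stronger one.
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept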

In the event detection stage, several algorithms based on the detection of anomalous and static regions have been designed, as well as finite-state machine models to define sets of rules.
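
The following minimal sketch illustrates, under assumed states and thresholds that are not taken from the project, how a finite-state machine over static regions could trigger an event such as an abandoned object:

class StaticRegionFSM:
    IDLE, CANDIDATE, STATIC, ALARM = range(4)

    def __init__(self, static_frames=150):
        self.state = self.IDLE
        self.counter = 0
        self.static_frames = static_frames  # e.g. roughly 5 s at 30 fps

    def update(self, region_present, region_moving):
        """Advance one frame; returns True when the event is fired."""
        if not region_present:
            self.state, self.counter = self.IDLE, 0
        elif region_moving:
            self.state, self.counter = self.CANDIDATE, 0
        else:
            self.counter += 1
            if self.counter >= self.static_frames:
                if self.state != self.ALARM:
                    self.state = self.ALARM
                    return True
            else:
                self.state = self.STATIC
        return False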

As horizontal tasks, several techniques have been developed to assess the quality of both generic and specific tracking algorithms (particle filter and mean shift). Feedback techniques for skin detection through dynamic integration of detectors have also been proposed. Finally, both the hardware and the software (DiVA platform) of the distributed video surveillance system have been updated, adding an application development environment.
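
As a hedged illustration of an online tracking-quality indicator (an assumption for exposition, not the project's measure; names and parameters are hypothetical), the snippet below scores a mean-shift style tracker by comparing the colour histogram of the currently tracked window against the reference model; a low score can flag drift:

import cv2
import numpy as np

def hue_histogram(bgr_patch, bins=16):
    """Normalized hue histogram of an image patch."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def tracking_quality(reference_hist, bgr_patch):
    """Return a score in [0, 1]; 1 means the patch matches the model."""
    candidate = hue_histogram(bgr_patch)
    # OpenCV's Bhattacharyya distance is 0 for identical histograms and
    # 1 for non-overlapping ones, so 1 - distance acts as a similarity score.
    dist = cv2.compareHist(reference_hist.astype(np.float32),
                           candidate.astype(np.float32),
                           cv2.HISTCMP_BHATTACHARYYA)
    return 1.0 - dist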

A very active collaboration with Dr. Andrea Cavallaro of Queen Mary University of London has been maintained throughout the project, and a new collaboration with Dr. Thomas Sikora of Technische Universität Berlin has been established.

Project proposal overview

The objective of the EventVideo project is to tackle the problem of event detection in video sequences in complex situations. To this end, in order to improve on the state of the art in event detection in the video surveillance and monitoring domain, the workplan proposes research into the design and development of new strategies for the different phases, or stages, of a video analysis chain.

The main proposed innovation starts from the hypothesis that the most traditional or popular approaches to object segmentation, detection and tracking, while successful in many simple and/or controlled applications, do not achieve satisfactory results on video sequences captured in quite frequent and especially complex situations (e.g., outdoor environments, adverse weather conditions, heavy shadows and reflections, high object density, low-resolution objects, partial occlusions, moving cameras, cluttered backgrounds). These situations may require a more thorough consideration of lighting effects, robust approaches to global motion estimation, combined models for object detection and tracking, and the use of complementary sensors.

Additionally, the project proposes to explore the benefits of connecting the results of each analysis stage via feedback approaches, both to enhance overall results and to dynamically control the efficiency of the analysis chain.

The proposed hypotheses are based on the previous experience and initial achievements of the Video Processing and Understanding Lab (VPULab), resulting from work in previous and ongoing research and development projects and on several master's theses and PhD dissertations in progress.

Last update 16/03/2015