Video Processing and Understanding Lab
Universidad Autónoma de Madrid, Escuela Politécnica Superior

TEC2017-88169-R MobiNetVideo (2018-2020-2021)
Visual Analysis for Practical Deployment of Cooperative Mobile Camera Networks

Supported by the
Ministerio de Economía, Industria y Competitividad
of the Spanish Government

Project proposal overview

The main objective of this project is to advance the state of the art in video analysis systems that employ mobile imaging devices, in response to the growing social and industrial demand for vision-based applications enabled by the proliferation of mobile devices with high imaging and computing capabilities.

This project will focus on networked systems composed of fixed, pan-tilt-zoom and mobile cameras (i.e., heterogeneous camera networks) and will address key challenges that currently limit their widespread adoption: autonomous and coordinated operation, the performance of visual analysis tools and practical deployment issues. The outcomes of this project will underpin emerging applications in many areas where camera mobility plays a key role, such as exploration with unmanned aerial vehicles for search-and-rescue missions and disaster management, smart traffic and people surveillance, in-home pervasive networks for monitoring the elderly, real-time health supervision, and life-logging cameras integrated with wearable body sensors.

Based on our experience in this area, this project starts from the hypothesis that most operational video analysis systems rely on rather constrained infrastructures built mainly on static and pan-tilt-zoom cameras. These cameras are rarely managed as networked devices: they run independently, mainly for recording and storage. Such systems also offer very limited mobility, which restricts their ability to adapt to unforeseen conditions while performing several tasks concurrently. Moreover, many existing approaches to camera networks are largely theoretical and have been validated only in a few controlled laboratory experiments, so practical deployment issues such as robustness to network constraints, resource availability and scalability are often overlooked. Together, these limitations prevent current approaches from being used in practical scenarios.

This context opens new research opportunities for innovative approaches that exploit the capabilities of networked mobile cameras. This project focuses its efforts on three related aspects: 1) visual analysis, 2) cooperation in camera networks and 3) practical deployment.

  1. Visual analysis aims to achieve a space-time description of the visual content captured by a single mobile camera.
  2. Cooperation in camera networks will explore decentralized and distributed schemes to allocate tasks at runtime through task-related decisions (e.g., learning the appearance of moving targets) and operational decisions (e.g., information sharing across cameras); a minimal illustrative sketch follows this list.
  3. Practical deployment will focus on self-adaptation to manage the available resources (e.g., sensing and computation), handling of operational constraints (e.g., real-time processing and communication) and efficient use of knowledge about the application environment (e.g., heat maps for detection).

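As an illustration of the decentralized allocation schemes mentioned in point 2, the following minimal Python sketch (not project code: the camera names, positions and utility function are hypothetical placeholders) shows a single auction-style round in which every camera bids on every task and the highest local bid wins each task.

# Illustrative sketch only: one round of decentralized, auction-style task
# allocation among cameras. Each camera computes a local utility for every
# task (e.g., expected view quality of a moving target) and broadcasts its
# bids; every camera can then resolve the winners locally without a central
# coordinator. All names and utility values below are made-up placeholders.

from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    position: tuple          # (x, y) location, used as a toy utility proxy
    free_capacity: float     # fraction of processing budget still available

@dataclass
class Task:
    name: str
    location: tuple          # (x, y) where the target/event was observed

def utility(camera: Camera, task: Task) -> float:
    """Toy utility: closer cameras with spare capacity bid higher."""
    dx = camera.position[0] - task.location[0]
    dy = camera.position[1] - task.location[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return camera.free_capacity / (1.0 + distance)

def allocate(cameras, tasks):
    """One bidding round: every camera bids on every task, best bid wins.

    In a real distributed setting each camera would broadcast only its own
    bids and run this same winner-selection rule locally; here the exchange
    is simulated in a single process.
    """
    bids = {t.name: [(utility(c, t), c.name) for c in cameras] for t in tasks}
    return {task: max(offers)[1] for task, offers in bids.items()}

if __name__ == "__main__":
    cameras = [Camera("cam_A", (0, 0), 0.8),
               Camera("cam_B", (10, 0), 0.5),
               Camera("cam_C", (5, 5), 0.9)]
    tasks = [Task("track_person_1", (1, 1)),
             Task("track_vehicle_2", (9, 1))]
    print(allocate(cameras, tasks))
    # {'track_person_1': 'cam_A', 'track_vehicle_2': 'cam_B'}
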
The research achievements will be put into practice in selected case studies that reflect the potential applications of mobile camera networks. To overcome the limited availability of real-world data for complex heterogeneous camera networks, this project uses simulation as an intermediate stage to validate performance objectives before deployment. Virtual-reality simulation will be combined with communication and hardware simulation to provide a rich test environment in which approaches can be evaluated in situations that can be repeated as needed.

Last update 14/06/2018