Video Processing and Understanding Lab
Universidad Autónoma de Madrid, Escuela Politécnica Superior
This work package covers the initial set-up and ongoing maintenance of a development framework for the remaining work packages.
Arrangement and configuration of the available equipment, and acquisition of complementary equipment, to establish the infrastructure needed to meet the project objectives (servers, wearable cameras and additional sensors).
Development of new features for the simulation environments of heterogeneous camera networks: the MSS virtual reality and WiseMnet simulators. This task covers generating test scenarios with the software simulators available at the VPULab.
Support for other tasks in generating test data and defining evaluation methodologies. It includes the selection of appropriate datasets (sequences and associated ground truth) and their generation if required.
Milestones:
Deliverables:
To perform a study of current technologies for applications related to heterogeneous camera networks where camera mobility plays a key role. Such studies will be performed on public datasets; if required, small scenarios will be recorded.
To evaluate state-of-the-art approaches for target tracking in an active vision application (e.g., tracking a teacher during a lecture) where the field of view of a single camera is continuously adapted to the environment, thus deciding what to observe over time.
To evaluate state-of-the-art approaches for detection in two case studies using a single moving camera: collision detection with a wearable camera (which may help people with disabilities avoid obstacles) and collision prediction in traffic-monitoring environments using on-board cameras and/or PTZ monitoring.
To evaluate state-of-the-art approaches for object detection, scene identification and semantic segmentation in applications for single wearable cameras such as life-logging.
To evaluate state-of-the-art approaches for multiple-target detection and tracking in data captured from a single camera on board an unmanned aerial vehicle.
Milestones:
Deliverables:
To obtain research contributions beyond the current state of the art in mobile camera networks. We consider two tasks devoted to single cameras (scene identification and semantic segmentation) and two further tasks for multiple mobile cameras (multi-view matching and cooperative detection/tracking).
To propose new methods for obtaining high-level descriptions of a scene and for arranging scenes into semantically coherent clusters. The goals of this task will be adapted according to the conclusions of T2.3.
To propose new methods that use descriptions of semantic segments to drive object detection approaches. The goals of this task will be adapted according to the conclusions of T2.3.
To propose new methods or strategies for online multi-person/object association for each camera in multi-camera settings, providing complementary information to the multi-view approaches. This task will be adapted to the conclusions reached in T2.1 and T2.2.
To exploit interactions among multiple cameras to optimize the overall performance (accuracy) via aggregation or cooperative fusion schemes. The goals of this task will be adapted to the conclusions of T2.1, T2.2 and T2.4.
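As an illustration of the kind of aggregation scheme mentioned above, the sketch below fuses per-camera detection confidences for a shared target by weighted averaging. This is a minimal, hypothetical example; the function name, the weights and the fusion rule are assumptions for illustration, not part of the project plan.

```python
# Hypothetical sketch: score-level fusion of detections from multiple cameras.
# Each camera reports a confidence in [0, 1] for the same candidate target;
# cameras with a better view (higher weight) contribute more to the result.

def fuse_confidences(scores, weights=None):
    """Weighted average of per-camera confidence scores.

    scores  -- list of per-camera confidences in [0, 1]
    weights -- optional per-camera reliability weights (default: uniform)
    """
    if not scores:
        raise ValueError("at least one camera score is required")
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# Example: three cameras observe the same target; camera 2 is partially
# occluded, so it is down-weighted.
fused = fuse_confidences([0.9, 0.4, 0.8], weights=[1.0, 0.5, 1.0])
```

In a real cooperative scheme the weights could come from camera-target geometry or past tracking reliability; the simple average here only shows the shape of the aggregation step.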
Milestones:
Deliverables:
To develop demonstrators of the research contributions for selected case studies. Moreover, solutions for the identified practical issues will be provided. These demonstrators will provide applications both for developers (for evaluation and testing) and for final users (use cases to be defined with the help of the Observing Partners). This work package makes use of the results of WP1, WP2 and WP3.
Using the WP2 and WP3 outcomes, the final use cases will be clearly defined (scenario), and the WP3 technology will be applied to enhance the single-camera results obtained in WP2 by making use of heterogeneous camera networks/configurations. This task also covers the implementation of the software required for each case study.
To extend the demonstrators developed in task T4.1 by providing solutions to the practical issues arising from their deployment. This task will consider real-time operation and latency as the main challenges, addressed through in-node and in-network optimization strategies (e.g., exploring the accuracy-cost trade-off or dynamically allocating tasks based on available resources).
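A minimal sketch of the kind of resource-driven task allocation alluded to above: each processing task is greedily assigned to the node with the most remaining capacity. The node names, task names and cost units are hypothetical; a deployed system would use measured load and network cost.

```python
# Hypothetical sketch: greedy assignment of processing tasks to camera-network
# nodes based on remaining capacity (the "available resources" criterion).

def allocate(tasks, capacities):
    """Assign each (task, cost) to the node with most remaining capacity.

    tasks      -- list of (task_name, cost) pairs
    capacities -- dict mapping node_name -> available capacity
    Returns a dict task_name -> node_name; raises if a task does not fit.
    """
    free = dict(capacities)
    assignment = {}
    # Place expensive tasks first so they get the roomiest nodes.
    for name, cost in sorted(tasks, key=lambda t: -t[1]):
        node = max(free, key=free.get)
        if free[node] < cost:
            raise RuntimeError(f"no node can run {name}")
        free[node] -= cost
        assignment[name] = node
    return assignment

# Example: a tracker and two detectors spread over two equally sized nodes.
plan = allocate([("track", 3), ("detect_a", 2), ("detect_b", 2)],
                {"node1": 4, "node2": 4})
```

The greedy rule is only one point in the accuracy-cost design space; a real in-network optimization could also trade detector accuracy for cost when no node has spare capacity.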
To extend the demonstrators developed in task T4.1 by providing solutions to the practical issues arising from the limitations of the visual sensors. As an initial approach, this task will consider the use of sensors embedded in single mobile (wearable) cameras, such as the depth-based Tango sensor and GPS geolocalization.
Milestones:
Deliverables:
To coordinate the project activities, follow up on progress and oversee dissemination.
This task comprises the following activities: monthly follow-up of project progress and achievements, control of workplan milestones and deliverable deadlines, workplan updates, corrective actions, and administrative issues.
This task coordinates the compilation and internal publication of intermediate results, as well as the planning of publications in journals and conferences. It will also coordinate the dissemination of results via the project web page and newsletters every six months.
This task is devoted to organizing one-day workshops, short courses and tutorials in each year of the project. The target audience is researchers, developers and professionals from the Observing Partners.
Milestones:
Deliverables:
This workplan was updated on 29/09/2017 and 02/07/2018, adjusting the original workplan to the budget and dates after the official project approval notification.
Last update: 29/09/2018