A Hand Gesture Detection Dataset

Presentation


These pages describe the Hand Gesture dataset (HGds), a dataset composed of several annotated hand-gesture captures performed by eleven different subjects, together with synthetically generated samples.

Some of the gesture dictionaries included in this dataset come from the state of the art: (Kollorz et al., 2008), (Molina et al., 2013) and (Soutschek et al., 2008), while two new collections of gestures are also proposed.

For more details, please refer to the "Dataset Downloading" section.

This dataset is only available for research purposes.

Main Reference:

Molina, J., Pajuelo, J.A., Escudero-Viñolo, M., Bescós, J., Martínez, J.M.: A natural and synthetic corpus for benchmarking of hand gesture recognition systems. Machine Vision and Applications 25(4), 943–954 (2014).

Additional References:

(Kollorz et al., 2008)  Kollorz, E., Penne, J., Hornegger, J., Barke, A.: Gesture recognition with a time-of-flight camera. Int. J. Intell. Syst. Technol. Appl. 5(3/4), 334–343 (2008).

(Molina et al., 2013)  Molina, J., Escudero-Viñolo, M., Signoriello, A., Pardás, M., Ferrán, C., Bescós, J., Marqués, F., Martínez, J.M.: Real-time user independent hand gesture recognition from time-of-flight camera video using static and dynamic models. Machine Vision and Applications 24(1), 187–204 (2013).

(Soutschek et al., 2008)  Soutschek, S., Penne, J., Hornegger, J., Kornhuber, J.: 3-d gesture-based scene navigation in medical imaging applications using time-of-flight cameras. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–6 (2008).

(Molina et al., 2014)  Molina, J., Martínez, J.M.: A synthetic training framework for providing gesture scalability to 2.5D pose-based hand gesture recognition systems. Machine Vision and Applications 25(5), 1309–1315 (2014).