TCTS Lab Staff

Sohaib Laraba


Researcher, PhD student  

University of Mons (UMONS) 
Engineering Faculty of Mons (FPMs) 

Numediart Research Institute  
TCTS Lab  
31, Boulevard Dolez  
B-7000 Mons (Belgium)  

phone: +32 6 537 4721  
fax: +32 6 537 4729  

LinkedIn profile | ResearchGate profile | Google Scholar profile | UMONS Staff page

Sohaib Laraba is a PhD student at UMONS and is part of the motion capture and analysis research group at the Numediart institute. His research includes gesture analysis, recognition and evaluation using Machine Learning and Deep Learning techniques.


Title: Gesture Recognition and Evaluation with Deep Learning

Summary: Motion capture sequences are high-dimensional signals that represent the spatio-temporal variation of human body skeleton joints. Classification of these signals is widely used and remains an active area of research: gesture recognition is an essential task in many domains, such as gaming, physical rehabilitation, sports, and movies. Much research has been carried out to improve the state-of-the-art; however, most of it relies on hand-crafted feature extraction, which is very time-consuming and, in some specific domains, requires feedback from experts.
Recent studies have focused on deep learning methods for gesture classification. The biggest advantage of these methods is that they operate directly on raw data, and the results obtained outperform most state-of-the-art approaches. The same holds in many other domains, including image and video classification and plant disease detection.
In our work, we propose an end-to-end gesture classification system based on deep learning models, and particularly Convolutional Neural Networks (CNNs). We propose a new representation of motion capture data that transforms a sequence into a 2D RGB image. This representation opens the door to the methods used in image and video classification, the most powerful of which are themselves based on CNNs: we take models already trained on huge image datasets and fine-tune them for our task of gesture classification, following the architecture shown in Figure 1. Any method from the field of image classification can thus be applied to our task.

Figure 1: General Architecture.
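
To make the Seq2Im representation concrete, here is a minimal NumPy sketch, not the exact implementation: the 25-joint layout, the per-axis normalization and the uint8 range are illustrative assumptions. Each joint's x, y, z coordinates are rescaled to [0, 255] and stored as R, G, B, so that one image axis indexes joints and the other indexes frames.

    import numpy as np

    def seq2im(seq):
        """Map a mocap sequence (n_frames, n_joints, 3) to a 2D RGB image.

        The x, y, z joint coordinates are rescaled to [0, 255] per axis
        and stored in the R, G, B channels; rows index joints, columns
        index frames, giving a (n_joints, n_frames, 3) uint8 image.
        """
        seq = np.asarray(seq, dtype=np.float64)
        img = np.empty_like(seq)
        for c in range(3):  # normalize each coordinate axis independently
            lo, hi = seq[..., c].min(), seq[..., c].max()
            img[..., c] = 255.0 * (seq[..., c] - lo) / max(hi - lo, 1e-8)
        return img.transpose(1, 0, 2).astype(np.uint8)  # (joints, frames, 3)

    # Example: a 60-frame sequence of 25 joints (e.g. a Kinect v2 skeleton).
    image = seq2im(np.random.randn(60, 25, 3))
    print(image.shape)  # (25, 60, 3)

The resulting image can then be fed to any off-the-shelf image classifier.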

The main objectives of this work are:

1- Propose a new spatio-temporal representation of mocap data (from high-dimensional signals to a 2D image-like representation - Seq2Im)
2- Use deep learning models (Convolutional Neural Networks - CNN) for skeleton-based gesture classification, advancing the state-of-the-art of skeleton-based gesture recognition (a fine-tuning sketch follows this list)
3- Implement a real-time gesture classification technique based on deep learning
4- Develop a new method for skill evaluation (applied to sports, in particular the Tai Chi martial art)
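
As a sketch of point 2, the snippet below fine-tunes an ImageNet pre-trained CNN on Seq2Im images. The original work does not specify the framework or architecture; PyTorch/torchvision, ResNet-18 and the 20-class head are assumptions made here for illustration.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 20  # assumption: number of gesture classes in the dataset

    # Load a CNN pre-trained on ImageNet and replace its classification
    # head so it outputs one score per gesture class.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    # Fine-tune all layers with a small learning rate.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One optimization step on a batch of Seq2Im images (N, 3, H, W)."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

Since a Seq2Im image is only n_joints x n_frames pixels, it would be resized to the network's expected input resolution (224x224 for ResNet) before training.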


Related Projects:


ParkAR project

The objective of the ParkAR project is to research and develop innovative solutions for augmented reality (AR) interactions with small groups of people.
More precisely, AR requires coherence between the virtual content and the surrounding real space, viewed through specialized glasses such as Microsoft's highly publicized HoloLens. While augmented reality for an isolated individual has been widely explored, by eyeglass and viewing-helmet manufacturers among others, group interaction between individuals sharing a common physical space remains an unexplored area. The ParkAR project will have to invent methods to:
- Generate augmented reality visual content that integrates the real presence of other individuals into the environment.
- Analyze what, in a real, virtual, or mixed scene, attracts the attention of the participants as well as the nature of the interactions between the members of a group and these points of focus.

To meet this challenge, the ParkAR project brings together a complementary team of two university laboratories and one company. Development will be carried out in close collaboration between all the partners, gathered in a single laboratory, the "ARLab" (Augmented Reality Lab) in Louvain-la-Neuve, where researchers from both academic entities and a representative of the industrial partner will work as a team.
The partners are the ELEN department of UCL in Louvain-la-Neuve, the Numediart Institute of the University of Mons, and the theme-park attraction creator Alterface Project of Wavre.


i-Treasures (Intangible Treasures - Capturing the Intangible Cultural Heritage and Learning the Rare Know-How of Living Human Treasures FP7-ICT-2011-9-600676-i-Treasures) is an Integrated Project (IP) of the European Union's 7th Framework Programme 'ICT for Access to Cultural Resources'. The project started on February 1, 2013, and will last 48 months.

The main objective of i-Treasures is to develop an open and extendable platform to provide access to ICH resources, enable knowledge exchange between researchers and contribute to the transmission of rare know-how from Living Human Treasures to apprentices. To this end, the project aims to go beyond the mere digitization of cultural content. Its main contribution is the creation of new knowledge by proposing novel methodologies and new technological paradigms for the analysis and modeling of Intangible Cultural Heritage (ICH). One of the main objectives of the proposal is the development of an appropriate methodology based on multisensory technology for the creation of information (intangible treasures) that has never been analyzed or studied before.

Within the i-Treasures project, the usability of the platform will be demonstrated in four different case studies: a) Rare Traditional Songs, b) Rare Dance Interactions, c) Traditional Craftsmanship and d) Contemporary Music Composition.

MotionMachine framework

MotionMachine is a C++ software toolkit for rapid prototyping of motion feature extraction and motion-based interaction design. It encapsulates the complexity of motion capture data processing into an intuitive and easy-to-use set of APIs, associated with the openFrameworks environment for visualisation. MotionMachine is a new framework designed for “sense-making”, i.e. enabling the exploration of motion-related data so as to develop new kinds of analysis pipelines and/or interactive applications.
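
For a flavor of the kind of motion feature extraction such a toolkit automates, here is a small Python sketch (deliberately not the MotionMachine C++ API, whose names are not reproduced here; the 120 fps default is an arbitrary assumption) that derives a joint's speed from its raw 3D trajectory:

    import numpy as np

    def joint_speed(positions, fps=120.0):
        """Per-frame speed of one joint from its 3D trajectory.

        positions: array (n_frames, 3) of x, y, z samples at `fps` Hz.
        Returns (n_frames - 1,) speeds in position units per second.
        """
        deltas = np.diff(np.asarray(positions, dtype=float), axis=0)
        return np.linalg.norm(deltas, axis=1) * fps

MotionMachine packages this kind of low-level computation, together with openFrameworks visualisation, behind its own APIs.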


Publications:

Regular Papers in Journals

S. Laraba, M. Brahimi, J. Tilmanne, T. Dutoit, "3D Skeleton-Based Action Recognition by Representing Motion Capture Sequences as 2D-RGB Images", Computer Animation and Virtual Worlds (CAVW), 2017. doi:10.1002/cav.1782.

S. Laraba, J. Tilmanne, "Dance performance evaluation using hidden Markov models", Computer Animation and Virtual Worlds (CAVW), 2016. doi:10.1002/cav.1715.

Papers in Conference Proceedings

A. Grammatikopoulou, S. Laraba, K. Dimitropoulos, N. Grammalidis, "An adaptive framework for the creation of bodymotion-based games", 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), 2017. doi:10.1109/VS-GAMES.2017.8056603.

T. Ravet, J. Tilmanne, N. d'Alessandro, S. Laraba, "Motion Data and Machine Learning: Prototyping and Evaluation", Human Centered Machine Learning Workshop at CHI, 2016.

N. Grammalidis, K. Dimitropoulos, F. Tsalakanidou, A. Kitsikidis, P. Roussel, B. Denby, P. Chawah, L. Buchman, S. Dupont, S. Laraba, B. Picart, M. Tits, J. Tilmanne, S. Hadjidimitriou, L. Hadjileontiadis, V. Charisis, C. Volioti, A. Stergiaki, A. Manitsaris, O. Bouzos, S. Manitsaris, "The i-Treasures Intangible Cultural Heritage Dataset", Proceedings of the 3rd International Symposium on Movement and Computing (MOCO), 2016. doi:10.1145/2948910.2948944.

S. Laraba, J. Tilmanne, T. Dutoit, "Adaptation Procedure for HMM-Based Sensor-Dependent Gesture Recognition", Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games (MIG), ACM, 2015, pp. 17-22. doi:10.1145/2822013.2822032.

J. Tilmanne, N. d'Alessandro, P. Barborka, F. Bayansar, F. Bernardo, R. Fiebrink, A. Heloir, E. Hemery, S. Laraba, A. Moinet, F. Nunnari, T. Ravet, L. Reboursière, A. Sarasua, M. Tits, N. Tits, F. Zajéga, "Prototyping a New Audio-Visual Instrument Based on Extraction of High-Level Features on Full-Body Motion", Proceedings of the 10th International Summer Workshop on Multimodal Interfaces (eNTERFACE), Mons, Belgium, August 2015.

Technical Reports

M. Tits, S. Laraba, J. Tilmanne, D. Ververidis, S. Nikolopoulos, S. Nikolaidis, A. P. Chalikias, "Intangible Cultural Heritage Indexing by Stylistic Factors and Locality Variations", FP7 i-Treasures Deliverable 4.5, 2016.

B. Denby, C. Leboullenger, P. Roussel, A. Manitsaris, A. Katos, A. Glushkova, G. Nikos, J. Tilmanne, S. Laraba, V. Christina, K. Dimitropoulos, F. Tsalakanidou, A. Kitsikidis, P. Chawah, S. Dupont, L. Hadjileontiadis, G. D. Sergiadis, S. Manitsaris, "Final Report on ICH Capture and Analysis", FP7 i-Treasures Deliverable 3.3, January 2016.

F. Pozzi, G. Cozzani, F. Dagnino, M. Ott, M. Tavella, P. Chawah, L. Buchman, N. Grammalidis, G. Chantas, K. Dimitropoulos, A. Kitsikidis, F. Tsalakanidou, A. Manitsaris, J. Tilmanne, S. Dupont, S. Laraba, L. Hadjileontiadis, V. Charisis, S. K. Hadjidimitriou, H. Cao, "Second Report on User Requirements Identification and Analysis", FP7 i-Treasures Deliverable 2.3, July 2015.
