Robust Real-Time Finger Tracking with Neural Networks


Background

The PhaseSpace system is an effective professional mocap system. It is an optical tracking system: several cameras capture the signal emitted by active LEDs, so the position and orientation of these LEDs can be detected in the tracking space. The system is very flexible and allows us to track several objects as well as the entire body.

However, as an optical system, it suffers from occlusion problems. Adding more LEDs and cameras reduces this risk, but in some cases it is still not enough.

Project Idea

In our case, we want to track fingers, but we have to deal with too many occlusions to obtain robust tracking. We need a reliable system to keep our subjects immersed in the virtual world during our experiments. Consequently, we will use a predictive model to estimate finger positions.

In this project, we are going to use an advanced machine learning method to build a good predictive model. We will begin by identifying and recording different hand postures, then feed these data to our classifier to train it. The amount of data is critical: we need enough data to obtain a robust classifier, but also to achieve continuous output. We do not want a set of predefined gestures; we want all the positions of our fingers in real time, so we can reproduce the true movement of our hands and fingers. Finally, we will use this classifier to compute the data needed to animate the fingers of a virtual avatar (which represents us in the virtual world). Game development with Unity3D/C# and data analysis skills are required for this project.
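To illustrate the kind of continuous prediction described above (joint positions rather than discrete gesture labels), here is a minimal sketch of a k-nearest-neighbour regressor mapping visible LED marker features to finger joint angles. It is written in Python purely for illustration (the project itself uses Matlab/C++ and Unity/C#), and the dataset and feature layout are entirely made up:

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """Predict continuous finger joint angles from visible LED positions.

    train_X: list of marker feature vectors (e.g. flattened x,y,z per LED)
    train_y: list of joint-angle vectors recorded alongside each posture
    query:   feature vector for the current frame
    Averaging the k nearest recorded postures gives a continuous output
    instead of a discrete gesture class.
    """
    nearest = sorted(
        range(len(train_X)),
        key=lambda i: math.dist(train_X[i], query),
    )[:k]
    n = len(train_y[0])
    return [sum(train_y[i][j] for i in nearest) / k for j in range(n)]

# Tiny illustrative dataset: a 1-D marker feature -> 2 joint angles (degrees).
X = [[0.0], [1.0], [2.0], [3.0]]
y = [[0.0, 0.0], [10.0, 5.0], [20.0, 10.0], [30.0, 15.0]]
print(knn_predict(X, y, [1.1], k=2))  # → [15.0, 7.5], a blend of the two nearest postures
```

A real model would need far more data and a stronger learner, but the key property is the same: the output interpolates between recorded postures instead of snapping to predefined gestures.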

Goal

– Implement a Unity application to record a large enough dataset (different hand postures from real subjects) to capture continuous movement.
– Generate the training dataset.
– Implement the chosen machine learning method and train the classifier.
– Export the classifier to Unity.
– Animate the fingers of a virtual avatar in Unity using our machine learning model.
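One possible way to realize the export step is to serialize the trained model parameters in a plain format (e.g. JSON) that a Unity C# script can parse and evaluate at runtime. The sketch below shows the idea for a hypothetical one-layer linear model; the weights, file name, and model shape are all made-up assumptions, not the project's actual method:

```python
import json

# Hypothetical trained parameters: 2 marker features -> 2 joint angles.
# The numbers are invented for illustration only.
model = {
    "weights": [[0.5, -0.2], [0.1, 0.3]],
    "bias": [1.0, -1.0],
}

# In practice this string would be written to a file such as
# "finger_model.json" that the Unity application loads at startup.
blob = json.dumps(model)

def predict(features, model):
    """Forward pass: angles[j] = sum_i features[i] * weights[i][j] + bias[j]."""
    w, b = model["weights"], model["bias"]
    return [sum(features[i] * w[i][j] for i in range(len(w))) + b[j]
            for j in range(len(b))]

loaded = json.loads(blob)
print(predict([2.0, 1.0], loaded))
```

Porting the forward pass to C# is then a few lines of array arithmetic, which keeps the Unity side free of any ML-framework dependency.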

Requirements

  • Unity (scripting in C#/DLL in C++)
  • Machine learning (Matlab/C++)
  • 3D geometry and quaternions (vectors, cross products, rotations)
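As a refresher on the quaternion skills listed above, here is a small self-contained sketch (in Python, for illustration; in Unity this is what `Quaternion` operations do internally) of rotating a vector by a unit quaternion via q · v · q*:

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def rotate(q, v):
    """Rotate vector v by unit quaternion q using q * (0, v) * conj(q)."""
    conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0,) + tuple(v)), conj)
    return (x, y, z)

# 90-degree rotation about the z-axis: q = (cos 45°, 0, 0, sin 45°).
half = math.radians(90) / 2
q = (math.cos(half), 0.0, 0.0, math.sin(half))
print(rotate(q, (1.0, 0.0, 0.0)))  # ≈ (0, 1, 0)
```

The same composition rule (`quat_mul`) is what chains joint rotations along a finger's bone hierarchy when animating the avatar's hand.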