Reactive Virtual Characters

In this project we seek to facilitate the creation of action-reaction sequences between a participant facing a virtual mirror and his virtual reflection.

The end user will use the software in two phases. The first phase will consist of recording action-reaction pairs. Each action will be recorded in front of a virtual mirror (created in Unity). Once recorded, the user will adjust the time delay between the action and the reaction. Examples of action-reaction pairs include the following (a sketch of one possible data representation appears after the list):

·       A waves his hand; B waves his hand,

·       A points to a virtual object; B goes to pick it up but stumbles and falls down,

·       A approaches B to see if he is all right; B wakes up.
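
One way to store such a pair is as two sequences of skeleton frames plus the user-adjusted delay. The sketch below is a minimal C++ representation under assumed conventions: the names (JointFrame, ActionReactionPair) and the 20-joint skeleton (as in the Kinect v1 SDK) are illustrative, not part of the proposal.

```cpp
#include <array>
#include <vector>

// One Kinect skeleton sample: a 3D position for each tracked joint.
// The 20-joint count follows the Kinect v1 SDK; this is an assumption.
struct JointFrame {
    std::array<float, 20 * 3> positions;  // x, y, z per joint
    double timestamp;                     // seconds since recording start
};

// A recorded action-reaction pair, plus the user-adjusted delay between
// the end of the action and the start of the reaction.
struct ActionReactionPair {
    std::vector<JointFrame> action;    // gesture performed by the participant
    std::vector<JointFrame> reaction;  // response played by the reflection
    double delaySeconds;               // adjustable trigger delay
};
```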

In the second phase, once the actions are recorded, it will be enough for the user to start performing one of the gestures recorded in the first phase to trigger the corresponding reaction in the virtual mirror. This will be based on the Gesture Follower software (http://imtr.ircam.fr/imtr/Gesture_Follower). An example of this library being used to synchronize video can be found at: https://www.youtube.com/watch?v=8b2vQeV0SyI
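
The Gesture Follower continuously reports, for each recorded gesture, how likely it is to match the live input and how far along the gesture the current alignment is. A trigger policy can then be built on top of that output; the sketch below is one possible policy, with the FollowerOutput fields and the threshold values being illustrative assumptions rather than the library's actual interface.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-template output of the gesture follower: a relative
// likelihood for the template and the normalized position (0..1) of the
// current alignment within it.
struct FollowerOutput {
    double likelihood;
    double progress;
};

// Return the index of the template whose reaction should be triggered,
// or -1 if no template is matched confidently enough yet. The thresholds
// are placeholders that would be tuned experimentally.
int pickTriggeredTemplate(const std::vector<FollowerOutput>& outputs,
                          double minLikelihood = 0.8,
                          double minProgress = 0.5) {
    int best = -1;
    double bestLikelihood = minLikelihood;
    for (std::size_t i = 0; i < outputs.size(); ++i) {
        if (outputs[i].likelihood >= bestLikelihood &&
            outputs[i].progress >= minProgress) {
            bestLikelihood = outputs[i].likelihood;
            best = static_cast<int>(i);
        }
    }
    return best;  // the caller then plays the reaction after its stored delay
}
```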

Main goal: To implement software that allows, first, recording these actions within the virtual world and, second, automatically triggering the prerecorded reactions when appropriate.

Advanced goal: To extend the method to integrate spatial constraints (for example, show that the system still works if the participant changes position in the virtual world, or if an object used for a pointing gesture is moved within the virtual world).
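
One conceivable way to handle a moved object, sketched below as an assumption rather than a prescribed method, is to rotate the recorded arm pose about the vertical axis so that the pointing direction lines up with the object's new position. All names and the single-axis simplification are illustrative.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Heading angle (radians) of v's horizontal projection around the y axis.
static float yawOf(const Vec3& v) { return std::atan2(v.x, v.z); }

// Rotate p about the vertical axis through pivot by angle radians.
static Vec3 rotateAroundY(const Vec3& p, const Vec3& pivot, float angle) {
    float dx = p.x - pivot.x, dz = p.z - pivot.z;
    float c = std::cos(angle), s = std::sin(angle);
    return { pivot.x + c * dx + s * dz, p.y, pivot.z - s * dx + c * dz };
}

// Retarget a recorded pointing pose: rotate the arm joints so the recorded
// pointing direction (shoulder -> originalTarget) lines up horizontally with
// the direction toward the object's new position.
void retargetPointing(Vec3* armJoints, int jointCount, const Vec3& shoulder,
                      const Vec3& originalTarget, const Vec3& newTarget) {
    Vec3 oldDir = { originalTarget.x - shoulder.x, 0.0f,
                    originalTarget.z - shoulder.z };
    Vec3 newDir = { newTarget.x - shoulder.x, 0.0f,
                    newTarget.z - shoulder.z };
    float angle = yawOf(newDir) - yawOf(oldDir);
    for (int i = 0; i < jointCount; ++i)
        armJoints[i] = rotateAroundY(armJoints[i], shoulder, angle);
}
```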

Calendar: During the first month, the student will set up the basic scenario using a back-projection screen, a Kinect, and Unity3D. He will implement the record and replay functionality, and also connect the tracking information provided by the Kinect to the Gesture Follower (already implemented in Max/MSP).
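
A common way to make that connection, assumed here rather than prescribed by the proposal, is to stream the Kinect joint positions to Max/MSP as OSC messages, which Max can receive with its built-in udpreceive object. Below is a minimal sketch using the oscpack library; the port number and the /kinect/joint address pattern are illustrative choices.

```cpp
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

// Send one joint position per OSC message to Max/MSP over UDP. On the Max
// side, [udpreceive 7400] followed by [route /kinect/joint] would unpack
// the values; port and address are assumptions.
void sendJoint(UdpTransmitSocket& socket, int jointId,
               float x, float y, float z) {
    char buffer[256];
    osc::OutboundPacketStream packet(buffer, sizeof(buffer));
    packet << osc::BeginMessage("/kinect/joint")
           << (osc::int32)jointId << x << y << z
           << osc::EndMessage;
    socket.Send(packet.Data(), packet.Size());
}

int main() {
    UdpTransmitSocket socket(IpEndpointName("127.0.0.1", 7400));
    // In the real application these values would come from the Kinect
    // skeleton stream; a fixed sample is sent here for illustration.
    sendJoint(socket, /*jointId=*/7, 0.1f, 1.2f, 2.0f);
    return 0;
}
```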


The second and third months of work will focus on implementing the algorithm described in Bevilacqua et al. (2010) so that the system can run independently of Max/MSP. If there is time left, we will explore how to extend this algorithm to integrate other constraints (for example, adapting a pointing gesture to an object that moves within the virtual scene).
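
In that algorithm, each recorded gesture becomes a left-to-right HMM with one state per recorded sample, and a forward pass aligns the live input to the template in real time, yielding both a likelihood and a time position. The sketch below shows that forward update for a single one-dimensional feature; the transition weights, observation variance, and class names are illustrative, and a full reimplementation would handle multidimensional Kinect features.

```cpp
#include <cmath>
#include <cstddef>
#include <numeric>
#include <utility>
#include <vector>

// Forward update for one gesture template, following the structure of
// Bevilacqua et al. (2010): one HMM state per recorded sample, with self,
// next, and skip transitions, and a Gaussian observation model centered
// on each template sample.
class TemplateFollower {
public:
    explicit TemplateFollower(std::vector<float> samples, float sigma = 0.1f)
        : tmpl_(std::move(samples)), sigma_(sigma), alpha_(tmpl_.size(), 0.0) {
        alpha_[0] = 1.0;  // start aligned to the beginning of the template
    }

    // Consume one live observation; the returned normalization factor acts
    // as an (unnormalized) likelihood that this template is being performed.
    double step(float observation) {
        const double a0 = 0.25, a1 = 0.5, a2 = 0.25;  // illustrative weights
        std::vector<double> next(alpha_.size(), 0.0);
        for (std::size_t i = 0; i < alpha_.size(); ++i) {
            double in = a0 * alpha_[i];             // stay on this sample
            if (i >= 1) in += a1 * alpha_[i - 1];   // advance one sample
            if (i >= 2) in += a2 * alpha_[i - 2];   // skip one sample
            double d = (observation - tmpl_[i]) / sigma_;
            next[i] = in * std::exp(-0.5 * d * d);  // Gaussian observation
        }
        double norm = std::accumulate(next.begin(), next.end(), 0.0);
        if (norm > 0.0)
            for (double& v : next) v /= norm;
        alpha_.swap(next);
        return norm;
    }

    // Current position within the template (likeliest state index); this is
    // the time-alignment output used to drive synchronized playback.
    std::size_t position() const {
        std::size_t best = 0;
        for (std::size_t i = 1; i < alpha_.size(); ++i)
            if (alpha_[i] > alpha_[best]) best = i;
        return best;
    }

private:
    std::vector<float> tmpl_;    // recorded template samples (one feature)
    float sigma_;                // observation standard deviation
    std::vector<double> alpha_;  // forward variables over template states
};
```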


Materials

A Kinect, a back-projection screen, a projector, Unity3D Pro, the Gesture Follower (free beta version: http://imtr.ircam.fr/imtr/Gesture_Follower) and a Max/MSP license.


Expected Outcome

A functional demo running in Unity that can record and replay sequences of actions, and detect when a participant has performed one of the recorded actions in order to trigger the corresponding reaction. The flexibility and ease of use of the system will be particular points of interest.

Expected Background

Programming in C++

Previous knowledge of Unity3D, gesture recognition, Max/MSP, or VR displays is a plus.

References

F. Bevilacqua, B. Zamborlin, A. Sypniewski, N. Schnell, F. Guédy and N. Rasamimanana (2010). Continuous realtime gesture following and recognition. Gesture in Embodied Communication and Human-Computer Interaction.

Contact

Joan Llobera

email: joan (dot) Llobera (at) epfl.ch

office INJ140