Research

National Projects

SNF – Immersive Embodied Interactions in Virtual Environments, in collaboration with the EPFL Cognitive Neuroscience Laboratory (Prof. O. Blanke & Dr B. Herbelin): this project started in 2012 with the PhD of Henrique Galvan Debarba, who defended his thesis in August 2016. A PLOS ONE article will appear in 2018 on the influence of first- and third-person viewpoints on user embodiment. The project presently continues with the PhD student Thibault Porssut.

 

 

Past Projects

CTI-KTI Projects: Walt and Walt-Mocap: These technology-transfer projects aim to leverage the EPFL-IIG know-how in posture optimization of articulated structures with Inverse Kinematics, and to improve performance for end users in terms of responsiveness and usability. The first aspect was addressed through a new parallelized IK algorithm, and the second through an intuitive 2D-to-3D stroke analysis.
SUVA Project ‘Parcours d’embûches virtuel’ (virtual obstacle course), in collaboration with the EPFL Laboratory of Movement Analysis and Measurement (Prof. Kamiar Aminian). Dr. Nan Wang contributed to this project, which uses VR technologies to increase awareness of risk in different environments and configurations.

SNF Sinergia: AERIAL CROWDS – Populating Mixed Reality Cities. Dr S. Gobron, Dr J. Ahn and Q. Silvestre contributed to this research topic on capturing and re-using real-crowd trajectories. We proposed a within-crowd evaluation of various crowd simulation approaches, presented at ACM VRCAI in Singapore in December 2012. The project was completed in April 2013.

 

SNF – Interactive Optimization of Mobile Articulated Structures: This line of research started in 1997 with the PhD thesis of Paolo Baerlocher and was continued by Benoît Le Callennec, Daniel Raunhardt and Eray Molla (2015). This short IIG presentation gathers key contributions on the postural control of virtual mannequins (i.e., virtual humans). See also our Movie and Publication pages. The project is currently on hold.

Some of these algorithms are currently being extended through a collaboration with the startup Mokastudio.
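
For readers unfamiliar with the underlying problem, here is a minimal, generic sketch of numerical Inverse Kinematics on a toy planar 2-link chain, using damped least squares. It only illustrates what postural control of an articulated structure means in practice; it is not the prioritized, parallelized IK developed in the projects above, and the link lengths, damping factor and iteration count are arbitrary assumptions.

import numpy as np

L1, L2 = 1.0, 0.8  # assumed link lengths of a toy planar 2-link chain

def forward(q):
    """End-effector position for joint angles q = (q1, q2)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """2x2 Jacobian of the end-effector position w.r.t. the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def solve_ik(target, q=np.zeros(2), damping=0.1, iters=100):
    """Damped least squares: dq = J^T (J J^T + lambda^2 I)^-1 * positional error."""
    for _ in range(iters):
        error = target - forward(q)
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), error)
        q = q + dq
    return q

q = solve_ik(np.array([1.2, 0.9]))
print("joint angles:", q, "reached position:", forward(q))

Prioritized IK, as studied in the theses cited above, generalizes this kind of update so that several tasks (e.g., reaching, balance, gaze) can be enforced with strict priorities on a full virtual mannequin.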

 

International Collaborations

 

Decoding Human Interactions in Virtual Environments from EEG

Collaboration with the Vale Institute of Technology (ITV) in Brazil (Dr. Schubert Carvalho). The technology under development aims to capture and decode cognitive processes (encoded in EEG signals) that reflect the user's internal interpretation of human actions during interaction with 3D environments. BSc Iraquitan Filho and Alexandre Gomes, both from ITV, are also contributing to this research project.
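
As a rough, purely illustrative sketch of what such decoding involves, the example below runs a generic EEG classification pipeline (band-power features followed by a linear classifier) on synthetic data. It is not the ITV/EPFL method; the frequency band, filter order, epoch layout and classifier are all assumptions.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def bandpower_features(epochs, fs, band=(8.0, 30.0)):
    """epochs: (n_trials, n_channels, n_samples) raw EEG; returns log band power."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.log(np.var(filtered, axis=-1))  # (n_trials, n_channels)

# Synthetic stand-in data: 100 trials, 32 channels, 2 s at 256 Hz, binary labels.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 32, 512))
labels = rng.integers(0, 2, size=100)

X = bandpower_features(epochs, fs=256)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print("chance-level accuracy on random data:", scores.mean())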

 

European projects and networks

 

CYBEREMOTIONS – Prof. D. Thalmann initiated the project in 2009; Dr R. Boulic took it over from 2011 to 2013. Dr S. Gobron, Dr J. Ahn, Dr N. Wang and Q. Silvestre contributed to this research topic on Collective Emotions in Cyberspace. Our contribution focused on the real-time expression of complex facial emotions.

Check the project's final synthesis video on YouTube.

You can download our Facial Expression demonstrator [Unity 3D application for Mac (43 MB) or Windows (38 MB)].

Just contact us to obtain the zip-file password.

What can you do?

  • Run the Unity CE_Face executable file (visible at top level after unzipping)
    • Select the window size when launching the executable. Quality is better in windowed mode. If you prefer full-screen mode, you can quit with Alt-F4.
  • Choose among four virtual characters (top right menu: default is CE_W_linda)
  • Design an asymmetric complex emotion in the Valence-Arousal Plane (top left corner). Just be aware that not all combinations lead to plausible expressions:
    • right button for the right side of the face (green cross)
    • left button for the left side of the face (blue cross)
    • An additional slider for specifying the Dominance
  • Explore the emotion dynamic model by activating the top toggle and specifying successive emotions in the Valence-Arousal plane with the left button (blue cross). This works only for symmetric facial expressions (the dynamic model is a contribution from D. Garcia of ETHZ).
  • Play a few prerecorded full-body animations combined with facial expressions (top right menu under the H-Anim label).

The Facial Expression demonstrator implements the technique described in this journal paper: “Asymmetric facial expressions: revealing richer emotions for embodied conversational agents”, Computer Animation and Virtual Worlds, vol. 24, no. 6, pp. 539–551, 2013.

Some more results on human sensitivity to asymmetric facial expressions can be found in this paper: N. Wang, J. Ahn & R. Boulic (April 2017): “Evaluating the Sensitivity to Virtual Characters Facial Asymmetry in Emotion Synthesis”, Applied Artificial Intelligence, Taylor & Francis.
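
To make the per-side control idea more concrete, here is a small hypothetical sketch: each half of the face receives its own Valence-Arousal target, and blendshape weights are obtained by bilinear blending of a few corner expressions. The corner expressions, weight vectors and blending scheme below are invented for illustration and do not reproduce the published model.

import numpy as np

# Toy "basis" expressions at the four corners of the Valence-Arousal square,
# each defined by a small set of blendshape weights (all values are assumptions).
BASIS = {
    (+1, +1): np.array([1.0, 0.0, 0.6]),   # e.g. joy-like corner
    (+1, -1): np.array([0.6, 0.0, 0.0]),   # e.g. relaxed corner
    (-1, +1): np.array([0.0, 1.0, 0.8]),   # e.g. anger-like corner
    (-1, -1): np.array([0.0, 0.6, 0.2]),   # e.g. sadness-like corner
}

def weights_for(valence, arousal):
    """Bilinear blend of corner expressions for one half of the face."""
    out = np.zeros(3)
    for (v, a), w in BASIS.items():
        out += 0.25 * (1 + v * valence) * (1 + a * arousal) * w
    return out

# An asymmetric expression: mildly positive on the right side,
# clearly negative and aroused on the left side.
right = weights_for(valence=0.4, arousal=0.2)
left = weights_for(valence=-0.7, arousal=0.8)
print("right-side blendshape weights:", right)
print("left-side blendshape weights:", left)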

 

IMS_VISTRA – Intelligent Manufacturing Systems network associated with the EU FP7 project VISTRA on Virtual Simulation and Training

 

 

VISIONAIR – collaboration with École Centrale de Nantes, December 2012