Demo / Dataset / Code

EPFL-IIG key research contributions since its creation in 2011

Virtual Reality

Sense of Embodiment

Many of the contributions presented under this research topic were carried out within the SNF project “Immersive Embodied Interactions in Virtual Environments”, started in 2012 in collaboration with EPFL-LNCO (Prof. Olaf Blanke and Dr. Bruno Herbelin).


Avatar error in your favor: Embodied avatars can fix users’ mistakes without them noticing

Abstract
In a context of fast-paced finger movements and with clear correct or incorrect responses, we swapped the finger animation of the avatar (e.g. the user moves the index finger, the avatar moves the middle one) to either automatically correct spontaneous mistakes or to introduce incorrect responses. Subjects playing a VR game were asked to report when they noticed the introduction of a finger swap. Results based on 3256 trials (∼24% of swaps noticed) show that swaps helping users have significantly lower odds of being noticed (and with higher confidence) than the ones penalizing users. This demonstrates how the context and the intention for motor action are important factors for the sense of agency (SoA) and for embodiment, opening new perspectives on how to design and study interactions in immersive VR.

Delahaye M, Blanke O, Boulic R, Herbelin B (2023) Avatar error in your favor: Embodied avatars can fix users’ mistakes without them noticing. PLOS ONE 18(1): e0266212. https://doi.org/10.1371/journal.pone.0266212
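For illustration, here is a minimal sketch of the swap mechanism itself, assuming a hypothetical animation layer in which the detected user movement is remapped before driving the avatar; the names and structure are illustrative, not the study’s actual code:

```python
# Minimal sketch of the finger-swap manipulation (illustrative names only):
# the detected user movement is remapped before it drives the avatar's hand.

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def swap_map(swap_pair=None):
    """Identity mapping, optionally with one pair of fingers swapped."""
    mapping = {f: f for f in FINGERS}
    if swap_pair is not None:
        a, b = swap_pair
        mapping[a], mapping[b] = b, a
    return mapping

def drive_avatar(moved_finger, mapping):
    # The avatar animates the remapped finger, not the one the user moved.
    return mapping[moved_finger]

# A trial where a swap is introduced between the index and middle fingers:
m = swap_map(("index", "middle"))
assert drive_avatar("index", m) == "middle"
```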


Changing Finger Movement Perception: Influence of Active Haptics on Visual Dominance

Experimental set-up

We show that participants’ visual judgment of their finger action is sensitive to multisensory conflicts (vision, proprioception, motor afferent signals, and haptic perception), thus bringing an important nuance to the widely accepted view of a general visual dominance.

Boban, L., Pittet, D., Herbelin, B., & Boulic, R.
“Changing Finger Movement Perception: Influence of Active Haptics on Visual Dominance”
in Frontiers in Virtual Reality, https://doi.org/10.3389/frvir.2022.860872

Dataset: https://zenodo.org/record/7009205


Adapting parameters through Reinforcement Learning

Our system can partially manipulate the displayed avatar movement, introducing a controlled distortion to make the overall experience more enjoyable and effective (e.g. training, exercising, rehabilitation). We propose a method taking advantage of Reinforcement Learning (RL) to efficiently adapt our system to each individual.

Porssut, T., Hou, Y., Blanke, O., Herbelin, B., & Boulic, R.
“Adapting Virtual Embodiment through Reinforcement Learning”
in IEEE Transactions on Visualization and Computer Graphics,
DOI: 10.1109/TVCG.2021.3057797

Dataset on Zenodo: https://doi.org/10.5281/zenodo.4298840
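As an illustration of the adaptation principle, here is a minimal sketch using a simple epsilon-greedy bandit over a hypothetical scalar distortion gain; the published method uses a richer RL formulation and reward design:

```python
import random

# Minimal sketch: epsilon-greedy bandit adapting a scalar distortion gain
# per user. The reward here is hypothetical (e.g. task success without a
# reported break in embodiment); the paper's state/action spaces are richer.

GAINS = [0.8, 0.9, 1.0, 1.1, 1.2]   # candidate distortion levels (assumed)

q = {g: 0.0 for g in GAINS}          # running value estimate per gain
n = {g: 0 for g in GAINS}

def choose_gain(eps=0.1):
    if random.random() < eps:
        return random.choice(GAINS)          # explore
    return max(GAINS, key=lambda g: q[g])    # exploit

def update(gain, reward):
    n[gain] += 1
    q[gain] += (reward - q[gain]) / n[gain]  # incremental mean

# One simulated session: reward peaks near a user-specific optimum of 1.1.
for trial in range(200):
    g = choose_gain()
    reward = 1.0 - abs(g - 1.1) + random.gauss(0, 0.1)
    update(g, reward)
print(max(GAINS, key=lambda g: q[g]))        # converges near 1.1
```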


Hand motion capture robust to occlusion

Real-time hand motion capture robust to occlusion (video link)

We trained a model of hand poses by combining IMU and optical mocap data; this allows, in a second stage, capturing hand poses more robustly in case of occlusion (and without using IMUs, owing to their drift).

D. Pavllo, T. Porssut, B. Herbelin, R. Boulic, “Real-Time Neural Network Prediction for Handling Two-Hands Mutual Occlusions”, Computers & Graphics: X, DOI: 10.1016/j.cagx.2019.100011
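A minimal sketch of the two-stage idea, with hypothetical data shapes and a generic regressor standing in for the paper’s neural network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Minimal sketch of the two-stage idea (data shapes are hypothetical):
# stage 1 builds a training set in which IMU-derived poses provide ground
# truth while optical markers are artificially occluded; stage 2 predicts
# the full hand pose from partial optical data alone, so IMUs (and their
# drift) are not needed at capture time.

rng = np.random.default_rng(0)
N, M, J = 5000, 3 * 10, 20            # frames, marker coords, joint angles
markers = rng.normal(size=(N, M))     # optical marker positions (stand-in)
pose = rng.normal(size=(N, J))        # IMU-derived joint angles (stand-in)

occluded = markers.copy()
mask = rng.random(markers.shape) < 0.3
occluded[mask] = 0.0                  # simulate 30% occluded channels

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=50)
model.fit(occluded, pose)             # learn an occlusion-robust mapping

# At capture time: full pose from partially occluded optical data only.
pred = model.predict(occluded[:1])
```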


Investigating the trade-off between Being-in-Control and Being-Helped in VR


First person viewpoint (top left) and the 3 standard views (side, top, front)(video link)


Avatar (left) of a seated subject (right) (video link)

We proposed the attraction well metaphor to help users achieve a complex tracking task while still preserving their feeling of being in control.

T. Porssut, B. Herbelin, R. Boulic, “Reconciling Being in-Control vs. Being Helped for the Execution of Complex Movements in VR”, in Proc. of IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, March 23rd-27th, 2019, DOI: 10.1109/VR.2019.8797716
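A minimal sketch of one possible attraction-well behavior (the paper’s exact formulation may differ): the displayed hand is blended toward the target with a pull that grows as the hand enters the well:

```python
import numpy as np

# Minimal sketch of attraction-well-style assistance: inside a radius around
# the target, the displayed hand position is blended toward the target, more
# strongly near the center, while the user's real motion dominates far away.

def attract(real_pos, target, radius=0.15, max_pull=0.5):
    d = np.linalg.norm(real_pos - target)
    if d >= radius:
        return real_pos                       # outside the well: no help
    w = max_pull * (1.0 - d / radius)         # pull grows toward the center
    return (1.0 - w) * real_pos + w * target  # blended displayed position

hand = np.array([0.05, 0.0, 0.0])
goal = np.zeros(3)
print(attract(hand, goal))                    # nudged toward the target
```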


Sensitivity to reach gesture distortion


Reach Movement manipulation (neutral, helping, hindering) for a subject wearing a Head-Mounted Display.(video link)

We investigated the extent to which the human reach gesture can be manipulated when visualized through an avatar in VR. Beyond identifying the tolerance threshold, it appears that we are more tolerant to being helped than to being hindered.

H. Debarba, R. Boulic, R. Solomon, O. Blanke and B. Herbelin, “Self-Attribution of Distorted Reaching Movements in Immersive Virtual Reality” (in Open Access), 20th Symposium of Virtual and Augmented Reality, Oct. 29th-Nov. 1st 2018, Brazil; Computers & Graphics, Vol. 76, pp 142-152, Nov. 2018.

Dataset and 2 Unity 3D scripts on Zenodo: https://zenodo.org/record/1409823


Sensitivity to self-contact distortion vs gesture amplification


gesture manipulation (video link)


First person viewpoint (top left) and the 3 standard views (side, top, front) (video link)

We confirmed the hypothesis that we are far more sensitive to the consistency of self-contact than to gesture manipulation (amplification) when visualized through an avatar in VR. It is hence critical to dedicate the necessary computing resources to ensuring consistent self-contact when embodied in VR through an avatar.

S. Bovet, H. Debarba, B. Herbelin, E. Molla and R. Boulic, “The Critical Role of Self-Contact for Embodiment in Virtual Reality” (in Open Access), IEEE Trans. Vis. Comput. Graphics, 24(4), April 2018, presented at the IEEE VR conference 2018.


Sensitivity to finger pointing redirection


Subject with a Head-Mounted Display (left), first person view (right) (video link)

We showed that participants are often unaware of the movement manipulation, even when it requires higher pointing precision than suggested by the visual feedback.

H. Galvan-Debarba, J.-N. Khoury, S. Perrin, B. Herbelin and R. Boulic, “Perception of Redirected Pointing Precision in Immersive Virtual Reality”, in Proc. of IEEE Virtual Reality, Reutlingen, Germany, March 2018


Characterizing First and Third Person Viewpoints and their Alternation for Embodied Interaction in Virtual Reality

A subject in the virtual pit experiment (video link)

Video of the experimental protocol when studying the alternation of first and third person viewpoints, associated with the following paper:

H. Debarba, S. Bovet, R. Solomon, O. Blanke, B. Herbelin & R. Boulic, “Characterizing First and Third Person Viewpoints and their Alternation for Embodied Interaction in Virtual Reality” (in Open Access), PLOS ONE, December 2017

Dataset on publisher site: https://doi.org/10.1371/journal.pone.0190109.s008


Influence of the point of view: first person (1PP) vs third person (3PP)


Subject with mocap suit during a reach task (video link)

This is a small sample illustrating the motion capture used to evaluate performance and the relation the subject builds with the avatar in first and third person perspectives:

Debarba, H. G., Molla, E., Herbelin, B., & Boulic, R. (2015, March). Characterizing embodied interaction in First and Third Person Perspective viewpoints. In 3D User Interfaces (3DUI), 2015 IEEE Symposium on (pp. 67-72).


Controlling a two-arm avatar with a single arm


A single-arm user is able to control a two-arm avatar (video link)

Our Two-Arm Coordination Model (TACM) synthesizes the missing limb pose from the instantaneous variations of the intact opposite limb for a given reach task:
E. Molla, R. Boulic, “A two-arm coordination model for phantom limb pain rehabilitation”, (Free Access with the ACM Authorizer link), The 19th ACM Symposium on Virtual Reality Software and Technology (VRST 2013), Singapore, October 2013
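As a much simpler illustration of synthesizing one limb from the other’s instantaneous variations, here is a naive mirroring sketch; the actual TACM is built from recorded two-arm coordination for reach tasks, not a fixed sign flip:

```python
import numpy as np

# Naive illustration of driving a missing limb from the intact one
# (NOT the actual TACM): right-arm joint deltas are simply mirrored
# onto the synthetic left arm at each frame.

MIRROR = np.array([1.0, -1.0, -1.0, 1.0])   # sign flips per joint (assumed)

def synthesize_left(prev_left, right_delta):
    """Update the synthetic left-arm pose from the right arm's variation."""
    return prev_left + MIRROR * right_delta

left = np.zeros(4)                            # 4 joint angles (illustrative)
right_prev = np.array([0.1, 0.2, 0.0, 0.3])
right_now = np.array([0.15, 0.25, -0.05, 0.3])
left = synthesize_left(left, right_now - right_prev)
```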

==============================

Cybersickness

Studying the influence of the body posture

Supine pose used to evaluate the cybersickness induced by playing the 3Dpacman game in VR (video link)

Devices such as MRI and EEG scanners significantly constrain participant posture and can lead to adverse side effects such as simulation sickness (SS). This study addresses the lack of prior work on this question by exploring how posture can influence SS to various degrees:

Marengo, P. Lopes, R. Boulic, “On the Influence of the Supine Posture on Simulation Sickness in Virtual Reality“, in Proc. of IEEE Conference on Games (CoG), London, August 20th-23rd 2019

Dataset on Zenodo: https://doi.org/10.5281/zenodo.3367270

Unity 3D code of the 3Dpacman game: https://gitlab.epfl.ch/iig/research/3dpacman.git


Studying Eye-Gaze and Blink-Rate Behaviors during cybersickness episodes

Previous studies have explored methods of cybersickness mitigation as well as correlated physiological factors. Thanks to advances in eye tracking technology within HMDs, this project focuses on exploring how eye behaviour changes depending on the intensity of the cybersickness an individual is currently suffering.

Lopes, P., N. Tian and R. Boulic, “Exploring Blink-Rate Behaviors for Cybersickness Detection in VR”, in Proc. of the IEEE Conference on Virtual Reality and 3D User Interfaces, Atlanta, March 22nd-26th, 2020.

Lopes, P., N. Tian and R. Boulic, “Eye Thought You Were Sick! Exploring Eye Behaviors for Cybersickness Detection in VR”, in Proc. of the ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG), Charleston, October 16th-18th, 2020.

Dataset on Zenodo: https://zenodo.org/record/3702309

==================================

Projection Techniques

Influence of the projection: planar vs non-planar


Interaction with the nearby virtual environment through various projections (video link)

In this paper we evaluate the use of non-planar projections as a means to increase the Field of View (FoV) in embodied Virtual Reality (VR). Our main goal is to bring the virtual body into the user’s FoV and to understand how this affects the virtual body/environment relation and quality of interaction.

Debarba, H. G., Perrin, S., Herbelin, B., & Boulic, R. (2015, November). Embodied interaction using non-planar projections in immersive virtual reality (Free Access with the ACM Authorizer link). In Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology (pp. 125-128). ACM.
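A minimal sketch of one such non-planar projection (the equidistant fisheye model, used here purely as an example of the family the paper evaluates):

```python
import numpy as np

# Minimal sketch of one non-planar projection (equidistant fisheye), which
# can exceed the <180° limit of planar perspective projection and thus bring
# the virtual body into the user's FoV.

def fisheye_project(d, f=1.0):
    """Project a unit view direction d (camera forward = +z) to image coords."""
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))  # angle from the forward axis
    r = f * theta                                # equidistant model: r = f*theta
    phi = np.arctan2(d[1], d[0])
    return np.array([r * np.cos(phi), r * np.sin(phi)])

# A direction 120° off-axis still lands at a finite image position,
# which is impossible under a planar pinhole projection:
d = np.array([np.sin(np.radians(120)), 0.0, np.cos(np.radians(120))])
print(fisheye_project(d))
```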

====================================

Scale-1:1 crowd immersion within a CAVE


Subjects within a virtual crowd (video link)

Within-crowd immersive evaluation of collision avoidance behaviors – 2012

Within the SNF Sinergia project ArealCrowds, we proposed a crowd simulation method, Trajectory Variant Shift (TVS), based on the re-use of real pedestrian trajectories. We detail how to re-use and shift these trajectories to avoid collisions while retaining the liveliness of the captured data. We then conducted a user study at scale 1:1 in a four-screen CAVE to compare our approach with three others when the subject is standing within the crowd to perform a visual search task (waiting for a specific person). Results confirm that our approach is considered as good as the state of the art regarding the subject’s spatial awareness within the crowd, and better regarding not only the perceived liveliness of the crowd but also the comfort in the CAVE.

J. Ahn, N. Wang, D. Thalmann, R. Boulic, Within-Crowd Immersive Evaluation of Collision Avoidance Behaviors (Free Access with the ACM Authorizer link), Proc. of ACM SIGGRAPH VRCAI 2012, Singapore, December 2012

================================================

Motion Capture & Virtual Human

Performance Animation Retargeting


Mapping of an actor pose onto a child 3D character (video link)

Within the SNF project “Interactive Optimization of Mobile Articulated Structures”, we proposed an approach to transfer the performer’s movement onto a target character while preserving the relative location of body parts. This prevents interpenetrations and preserves the meaning of the performer’s action. It is achieved in real time while the performer is moving, thus allowing 3D animation content to be produced more efficiently.

E. Molla, H. Galvan-Debarba and R. Boulic, “Egocentric Mapping of Body Surface Constraints” (in Open Access), IEEE Trans. Vis. Comput. Graphics, July 2018, 24(7), DOI: 10.1109/TVCG.2017.2708083.


IK swivel singularity (left) ; singularity-free method (right) (video link)

We propose the Middle-Axis-Rotation (MAR) parametrization of human limbs that addresses the ill-conditioned cases of analytical Inverse Kinematics (IK) algorithms. The MAR parametrization is singularity-free in the reach space of the human limbs.

E. Molla, R. Boulic, “Singularity Free Parametrization of Human Limbs”, (Free Access with the ACM Authorizer link), MIG ’13 Proceedings of the ACM Conference Motion in Games, Dublin, Ireland, November 2013, https://doi.org/10.1145/2522628.2522649


Subjective Experience of Embodied Interaction (with Jacobian-based IK)


Full-body interaction in a virtual kitchen (video link)

The subjective experience of Embodied Interaction goes much further than ensuring a good immersive experience in a virtual environment. We also want the user, who is equipped with mocap sensors, to feel as if they were a target subject with a very different body height. We retained the example of a child as an extreme case that is relevant to the design of a large range of consumer equipment and dedicated facilities (homes, schools, hospitals, cars, etc.). We advocate that a better subjective immersion is achieved by scaling the whole space and virtual environment by the ratio of user height to target subject height. This makes sense because neuroscience has demonstrated that an individual evaluates distances egocentrically, in body size units. In the video this results in scaling up the whole space when an adult user adopts the body size of a child.

R. Boulic, D Maupu, D Thalmann, “On Scaling Strategies for the Full Body Interaction with Virtual Mannequins”, Journal Interacting with Computers, Special Issue on Enactive Interfaces, Elsevier, 21(1-2), January 2009. 11-25.
Motion capture: Eray Molla
Video edit: Nan Wang
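A minimal sketch of the advocated scaling strategy, with illustrative numbers:

```python
# Minimal sketch of the advocated scaling strategy: scale the whole virtual
# environment by the ratio user height / target subject height, so that
# distances, evaluated egocentrically in body-size units, stay consistent.

def environment_scale(user_height_m, target_height_m):
    return user_height_m / target_height_m

# A 1.80 m adult embodying a 1.20 m child: the environment is scaled up by
# 1.5x, so a door handle modeled at 1.00 m is displayed at 1.50 m and again
# demands the same reach, relative to body size, as it does for the child.
s = environment_scale(1.80, 1.20)
print(s, 1.00 * s)   # 1.5  1.5 m
```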

=========================================

Computer Animation

Intuitive software for 3D character posing from 2D strokes

sketching 3D character poses (video link)

 

The CTI WALT project [2015-2017] (with Moka Studio) provided an intuitive tool suitable for 2D artists using touch-enabled pen tablets. The artist-oriented tool is easy-to-use, real-time, versatile, and locally refinable.

M. Mahmudi, P. Harish, B. Le Callennec, and R. Boulic, “Artist-Oriented 3D Character Posing from 2D Strokes”, Computers & Graphics, Vol. 57, p. 81-91, June 2016, Elsevier, in Open Access, DOI: 10.1016/j.cag.2016.03.008

Scenes and characters are designed by Mireille Clavien


Superfast prioritized IK for 3D character posing (video link)

The second core contribution from the CTI WALT project (with Moka Studio) is a superfast Jacobian-based IK solver taking advantage of GPU and/or multi-core architectures, achieving 10 to 150 times speedup over the state of the art.

Pawan Harish, Mentar Mahmudi, Benoît Le Callennec, and Ronan Boulic. 2016. Parallel Inverse Kinematics for Multithreaded Architectures (Free Access with the ACM Authorizer link). ACM Trans. Graph. 35, 2, Article 19 (February 2016), 13 pages. Presented at SIGGRAPH 2016, Anaheim.

YouTube links: ACM TOG demo, SIGGRAPH16 teaser

Scenes and characters are designed by Mireille Clavien


Long term real trajectory reuse – 2011


On the fly reuse of real-crowd movements (video link)

Within the SNF Sinergia project ArealCrowds, we improved the realism of real-time simulated crowds by reducing short-term collision avoidance through long-term anticipation of pedestrian trajectories. For this aim, we reused outdoor pedestrian trajectories obtained with non-invasive means.

The concept of region goal is employed to enforce the principle of “sufficient satisfaction”: it allows the pedestrians to relax the prescribed trajectory to the traversal of successive region goals.

J. Ahn, S. Gobron, Q. Silvestre, B. Shitrit, H. Beny et al., Long Term Real Trajectory Reuse Through Region Goal Satisfaction, The Fourth International Conference on Motion in Games, Edinburgh, UK, 2011.


Emotion


example of asymmetric facial expressions (video link)

Conveying ambivalent feelings through asymmetric facial expressions – 2017

Achieving effective facial emotional expressivity under a real-time rendering constraint requires leveraging all possible sources of inspiration, especially observations of real individuals. One of them is the frequent asymmetry of facial expressions of emotions, which allows the expression of complex emotional feelings such as suspicion, smirking, and emotions hidden due to social conventions. To achieve such a higher degree of facial expression, we propose a new model for mapping emotions onto a small set of 1D Facial Part Actions (FPAs) that act on antagonist muscle groups or on individual head orientation degrees of freedom. The proposed linear model can automatically drive a large number of autonomous virtual humans or support the interactive design of complex facial expressions over time.

N. Wang, J. Ahn and R. Boulic “Evaluating the Sensitivity to Virtual Characters Facial Asymmetry in Emotion Synthesis“, Applied Artificial Intelligence, 31 (2), 103-118, April 2017, Taylor & Francis, DOI: 10.1080/08839514.2017.1299983

Associated image dataset on Zenodo: https://doi.org/10.5281/zenodo.399066

J. Ahn, S. Gobron, D. Thalmann, R. Boulic, “Asymmetric Facial Expressions: Revealing Richer Emotions for Embodied Conversational Agents”, Journal of Computer Animation and Virtual Worlds, 24(6), Wiley, 2013, DOI: 10.1002/CAV.1539
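A minimal sketch of a linear emotion-to-FPA mapping, evaluated once per face side to allow asymmetry; the FPA names and coefficients below are illustrative, not the published model:

```python
import numpy as np

# Minimal sketch of a linear mapping from an emotion specification to a
# small set of 1D Facial Part Actions (FPAs), one evaluation per face side
# to allow asymmetry. The FPAs and weights are illustrative placeholders.

FPAS = ["brow_raise", "lip_corner", "lid_open", "head_pitch"]
W = np.array([[ 0.2, 0.6],     # brow_raise  <- (valence, arousal)
              [ 0.8, 0.1],     # lip_corner
              [ 0.1, 0.7],     # lid_open
              [-0.3, 0.2]])    # head_pitch

def fpa_activations(valence, arousal):
    return np.clip(W @ np.array([valence, arousal]), -1.0, 1.0)

# Ambivalent expression: a smile on the left side, suspicion on the right.
left = fpa_activations(valence=0.7, arousal=0.3)
right = fpa_activations(valence=-0.2, arousal=0.4)
```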

This research was initiated through the CyberEmotions EU project, led by Prof. D. Thalmann from 2009 to 2011 and by Dr R. Boulic from 2011 to 2013. Dr S. Gobron, Dr J. Ahn, Dr N. Wang and Q. Silvestre contributed to this research topic on Collective Emotions in Cyberspace. Our contribution focused on the real-time expression of complex facial emotions, as can be tested with the Unity 3D demonstrator described below:

You can download our Facial Expression demonstrator [Unity 3D application for Mac (43 MB) or Windows (38 MB)].

Just contact us to obtain the zip file password.

What can you do?

  • Run the Unity CE_Face executable file (visible at top level after unzipping)
    • Select the window size when launching the executable. Quality is better in windowed mode. If you prefer “full-screen” you can quit with Alt-F4.
  • Choose among four virtual characters (top right menu: default is CE_W_linda)
  • Design an asymmetric complex emotion in the Valence-Arousal plane (top left corner). Just be aware that not all combinations lead to plausible expressions:
    • right button for the right side of the face (green cross)
    • left button for the left side of the face (blue cross)
    • additional slider for specifying the Dominance
  • Explore the emotion dynamic model by activating the top toggle and specifying successive emotions in the Valence-Arousal plane with the left button (blue cross). Only for symmetric facial expressions. (The dynamic model is a contribution from D. Garcia of ETH Zurich.)
  • Play a few prerecorded full-body animations combined with facial expressions (top right menu, under the H-Anim label).

The following two papers describe the CyberEmotions system architecture.

CyberEmotion chatting system (video link)

An NVC emotional model for conversational VHs – 2012

This research proposed a new emotional model for Virtual Humans (VHs) in a conversational environment. As part of a multi-user emotional 3D-chatting system, the research focuses on how to formulate and visualize the flow of emotional states defined by the Valence-Arousal-Dominance (VAD) parameters. From this flow of emotion over time, we visualized the change of the VHs’ emotional state through the proposed emoFaces and emoMotions. The notion of Non-Verbal Communication (NVC) was exploited to drive plausible emotional expressions during conversation. With the help of a proposed interface, where a user can parameterize the emotional flow, we succeeded in varying the emotion expressions and reactions of VHs in a 3D conversation scene.

J. Ahn, S. Gobron, D. Garcia, Q. Silvestre, D. Thalmann, R. Boulic, An NVC Emotional Model for Conversational Virtual Humans in a 3D Chatting Environment, AMDO 2012, LNCS Vol. 7378, 2012, pp 47-57


Experimental setup for evaluating the chatting system (video link)

An event-based 3D NVC chatting architecture – 2012

Non-verbal communication (NVC) such as gesture, posture, and facial expression makes up about two-thirds of all communication. However, this fundamental aspect of communicating is often omitted in 3D social forums or virtual-world oriented games. This research proposed an answer to this issue by presenting a multi-user 3D-chatting system enriched with motion-based NVC. This event-based architecture recreates a context by extracting emotional cues from dialogs and derives potential virtual-human body expressions from that event-triggered context model.
We structured the system architecture to enable modeling NVC in a multi-user 3D-chatting environment. We present the transition from dialog-based emotional cues to body language, and the management of NVC events in the context of a virtual reality client-server system.

S. Gobron, J. Ahn, D. Garcia, Q. Silvestre, D. Thalmann, R. Boulic, An Event-Based Architecture to Manage Virtual Human Non-Verbal Communication in 3D Chatting Environment, AMDO 2012, LNCS Vol. 7378, 2012, pp 58-68

Video of the CyberEmotions EU project final synthesis.

=========================================

Past key research topics performed before 2011, within EPFL-VRLAB and EPFL-LIG

Virtual Reality

Full-body Avatar control with collision avoidance – 2009


Full-body interaction with collision damping (video link)

In this line of research we complemented the full-body avatar control in VR, performed with Jacobian-based IK, with a collision anticipation mechanism that damps the movement component heading towards an obstacle (visualized with temporary red lines in the video). The goal is to automatically adjust the avatar pose to prevent interpenetration, so that the user is not disturbed by such artifacts induced by the lack of haptic feedback.

M. Peinado, D. Meziat, D. Maupu, D. Raunhardt, D. Thalmann, R. Boulic, “Full-body Avatar Control with Environment Awareness”, IEEE CGA, 29(3), May-June 2009, DOI: 10.1109/MCG.2009.42
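A minimal sketch of the damping idea, in an illustrative formulation: only the velocity component heading towards the obstacle is reduced, increasingly so as the clearance shrinks:

```python
import numpy as np

# Minimal sketch of damping the movement component heading towards an
# obstacle (illustrative formulation, not the paper's exact equations).

def damp_towards_obstacle(v, normal, distance, influence=0.5):
    """v: end-effector velocity; normal: unit vector pointing away from the
    obstacle surface; distance: current clearance in metres."""
    approach = -np.dot(v, normal)          # positive when moving towards it
    if approach <= 0 or distance >= influence:
        return v                           # moving away or far: untouched
    k = 1.0 - distance / influence         # damping grows near the obstacle
    return v + k * approach * normal       # remove part of the approach

v = np.array([0.0, 0.0, -1.0])             # heading into the obstacle
n = np.array([0.0, 0.0, 1.0])
print(damp_towards_obstacle(v, n, distance=0.1))   # strongly damped
```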


Scaling the environment to impersonate a range of potential users (video link)

Alter body – 2008 [with Jacobian-based IK]

We collaborated with the University of Geneva (anthropometric body meshes) and CEIT in Spain (real-time motion capture) to assess the importance of scaling the virtual environment when controlling an avatar that may have a different body height. The study shows that the avatar control is much more effective when working with a third person viewpoint in front of a large immersive screen rather than with a Head-Mounted Display. This work was achieved in the framework of the European Union Network of Excellence “Enactive Interfaces”.

EPFL VRLAB: Damien Maupu, Ronan Boulic
University of Geneva: Mustafa Kasap
CEIT: Luis Unzueta


3D Virtual calligraphy – 2006


Painting on a large virtual canvas (video link)

 

We explored a new type of light painting interface in the framework of the European Union Network of Excellence “Enactive Interfaces”. Full-body movements are exploited to create 3D entities as ribbons, tubes or spray. The entities are visualized on a transparent canvas.

Damien Maupu, José Rosales, Ronan Boulic and the painter Muma


Real-Time Full-Body Motion capture with magnetic sensors – 1998


Real-time full-body motion capture back in the XXth century (video link)

Demonstration of a motion capture technique that relies on the position/orientation measurements provided by magnetic sensors strapped to body segments. The real-time posture reconstruction algorithm employs the position measurement of only one sensor (at the spine base) and only the orientation measurements for all the others. This guarantees the high robustness of the method.

T. Molet, R. Boulic, D. Thalmann, Human Motion Capture Driven by Orientation Measurements, Presence, 8(2), pp 187-203, MIT Press, April 1999

T. Molet, R. Boulic, S. Rezzonico, D. Thalmann, “An architecture for immersive evaluation of complex human tasks”, IEEE Transactions on Robotics and Automation, Special Section on Virtual Reality, 15(3), pp 475-485, June 1999
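A minimal sketch of the reconstruction principle on an illustrative two-segment chain: each joint position is obtained from its parent’s position and the measured segment orientation, so only the spine-base sensor’s position is ever used:

```python
import numpy as np

# Minimal sketch of posture reconstruction from one root position plus
# per-segment orientations (illustrative 2-segment chain): joint positions
# follow by forward kinematics, with no further position measurements.

def reconstruct(root_pos, orientations, lengths):
    """orientations: list of 3x3 rotation matrices, one per segment."""
    positions = [np.asarray(root_pos, dtype=float)]
    for R, L in zip(orientations, lengths):
        # advance along the segment's local +y axis, rotated into world frame
        positions.append(positions[-1] + R @ np.array([0.0, L, 0.0]))
    return positions

I = np.eye(3)
print(reconstruct([0, 1, 0], [I, I], [0.3, 0.25]))
```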


Virtual fighting: full-body real-time interaction with an autonomous agent (NPC) – 1997


Karate full-body interaction (video link)

A hierarchical model of human actions is used to capture the human body posture in real time, via magnetic sensors attached to the user. The demo shows a live participant with ten sensors used to animate the avatar in the virtual scene. The participant performs fight gestures which are recognized by the virtual opponent. The latter responds by playing back pre-recorded keyframe sequences.

L. Emering, R. Boulic, D. Thalmann, Conferring human action recognition skills to life-like agents, Journal of Applied Artificial Intelligence, Special Issue on Animated Interface Agents, 13(4-5), pp 539-565, June-August 1999

L. Emering, R. Boulic, D. Thalmann, Interacting with Virtual Humans Through Body Actions, IEEE Computer Graphics and Applications, “Projects in VR”, pp 8-11, January 1998

================================================

Human Motion Modelling & Computer Animation

Encapsulating motion continuity for constraint-based motion editing – 2013


Prioritized motion editing in Motion latent space (video link)

We introduced a novel method for interactive human motion editing. Our main contribution is the development of a Low-dimensional Prioritized Inverse Kinematics (LPIK) technique that handles user constraints within a low-dimensional motion space – also known as the latent space.

Our technique is based on the mathematical connections between linear motion models such as Principal Component Analysis (PCA) and Prioritized Inverse Kinematics (PIK). Furthermore, two strategies to impose motion continuity based on PCA are introduced.

S. Carvalho, R. Boulic, C. Vidal, D. Thalmann, Latent motion spaces for full-body motion editing, The Visual Computer, 29(3), 171-188, 2013.
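A minimal sketch of IK restricted to a PCA latent space (without the priority machinery of LPIK): with poses q = q_mean + E a, the chain rule gives the task Jacobian with respect to the latent coordinates as J_a = J_q E, so the solve iterates in the much smaller latent space:

```python
import numpy as np

# Minimal sketch of IK in a PCA latent space (illustrative stand-ins, not
# the full LPIK): poses are q = q_mean + E @ a, so the task Jacobian w.r.t.
# the latent coordinates is J_a = J_q @ E.

rng = np.random.default_rng(1)
n_joints, n_latent = 40, 6
q_mean = rng.normal(size=n_joints)
E = np.linalg.qr(rng.normal(size=(n_joints, n_latent)))[0]  # orthonormal basis

def jacobian_q(q):               # stand-in for the kinematic Jacobian (3 x n)
    return rng.normal(size=(3, n_joints))

def solve_latent(task_err, a, q):
    J_a = jacobian_q(q) @ E      # 3 x n_latent instead of 3 x n_joints
    return a + np.linalg.pinv(J_a) @ task_err

a = np.zeros(n_latent)
a = solve_latent(np.array([0.01, 0.0, 0.02]), a, q_mean + E @ a)
q = q_mean + E @ a               # edited pose stays in the motion subspace
```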


Motion constraint


Prioritized motion editing in pose latent space (video link)

We proposed a hybrid postural control approach taking advantage of data-driven and goal-oriented methods while overcoming their limitations. We took advantage of the latent space characterizing a given motion database. We introduced a motion constraint operating in the latent space to benefit from its much smaller dimension compared to the joint space. This allows its transparent integration into a Prioritized Inverse Kinematics (PIK) framework. The motion constraint benefits from the natural flow of movement provided by the motion database to channel the convergence of the PIK while retaining the spatio-temporal coherence of the captured motions.

D. Raunhardt and R. Boulic, Motion constraint, The Visual Computer, vol. 25, p. 509-518, 2009.

D. Raunhardt and R. Boulic, Immersive singularity-free full-body interactions with reduced marker set, Computer Animation and Virtual Worlds, vol. 22, p. 407-419, 2011.


Interactive low-dimensional motion synthesis by combining motion models and PIK


Golf swing motion editing in latent space (video link)

We introduced a constraint-based motion editing technique enforcing the intrinsic motion flow of a given motion pattern (e.g., golf swing). Its major characteristic is to operate in the motion Principal Coefficients (PCs) space instead of the pose PCs space. By construction, it is sufficient to constrain a single frame with Inverse Kinematics (e.g., the hitting position of the golf club head) to obtain a motion solution preserving the motion pattern style.

S. Carvalho, R. Boulic and D. Thalmann, Interactive Low-Dimensional Human Motion Synthesis by Combining Motion Models and PIK, Computer Animation & Virtual Worlds, vol. 18, 2007.


Progressive clamping


progressive inequality joint constraints (video link)

We proposed the progressive clamping method to better handle the kinematic anisotropy of joint limits for virtual mannequins or robots. Our method damps only the component of the joints’ variation heading towards the limits. In addition, we proposed to dynamically express the corrective joint variation as a highest-priority constraint, which naturally extends the management of inequality constraints.

D. Raunhardt, R. Boulic, “Progressive Clamping”, IEEE ICRA07, Rome
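A minimal sketch of the clamping idea in an illustrative per-joint formulation: only the variation component heading towards a limit is damped, progressively as the joint approaches it:

```python
import numpy as np

# Minimal sketch of progressive clamping (illustrative formulation): the
# joint variation is scaled down only when it heads towards a nearby limit.

def progressive_clamp(q, dq, q_min, q_max, buffer=0.2):
    dq = dq.copy()
    for i in range(len(q)):
        if dq[i] > 0:                       # heading towards the upper limit
            margin = (q_max[i] - q[i]) / buffer
        else:                               # heading towards the lower limit
            margin = (q[i] - q_min[i]) / buffer
        dq[i] *= np.clip(margin, 0.0, 1.0)  # damp only near the limit
    return dq

q = np.array([0.9, 0.0])
dq = np.array([0.3, 0.3])                   # joint 0 is close to its limit
print(progressive_clamp(q, dq, q_min=np.array([-1., -1.]),
                        q_max=np.array([1., 1.])))   # [0.15, 0.3]
```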


Robust kinematic constraint detection for motion data


Constraint detection in a captured ballet motion (video link)


Automatic constraint detection in captured human movement (video link)

We developed a method for detecting kinematic constraints in motion data, which is an important step to ease further operations such as blending or motion editing. It detects when an object (or an end-effector) is stationary in space or is rotating around an axis or a point.
Our method is fast, generic and may be used on any kind of object in the scene. Furthermore, it is robust to highly noisy data, as we detect and reject aberrant samples using a least median of squares (LMedS) method.

B. Le Callennec, R. Boulic, Robust Kinematic Constraint Detection for Motion Data, Proc. of EG-SIGGRAPH SCA06, Vienna, Sept. 2006
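A minimal sketch of the robustness idea for the stationary case, using a median-based criterion in the spirit of LMedS (the paper’s actual estimator is more complete):

```python
import numpy as np

# Minimal sketch of detecting a stationary end-effector with a median-based
# criterion: the median squared distance to a robust center is insensitive
# to up to ~50% of aberrant (noisy) frames.

def is_stationary(positions, tol=1e-3):
    """positions: (N, 3) end-effector positions over a time window."""
    center = np.median(positions, axis=0)          # robust center estimate
    med_sq = np.median(np.sum((positions - center) ** 2, axis=1))
    return med_sq < tol ** 2

rng = np.random.default_rng(2)
window = np.tile([0.4, 1.0, 0.2], (30, 1)) + rng.normal(0, 1e-4, (30, 3))
window[5] = [2.0, 2.0, 2.0]                        # one aberrant sample
print(is_stationary(window))                       # True despite the outlier
```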


Data-based locomotion engine


PCA-based integrated walking and running motion synthesis (video link)

We proposed an on-line reactive animation method generalizing a biped locomotion pattern combining standing, walking and running. The resulting engine is able to animate human-like characters of any size and proportions. For that purpose, motion capture data from several persons has been organized into a hierarchical PCA (Principal Component Analysis) structure to perform not only interpolation, but also extrapolation.

P. Glardon, R. Boulic and D. Thalmann, Robust on-line adaptive footplant detection and enforcement for locomotion, The Visual Computer, 22(3), 194-209, 2006.

P. Glardon, R. Boulic and D. Thalmann, Dynamic obstacle avoidance for real-time character animation, The Visual Computer, 22(6), 399-414, 2006
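A minimal sketch of PCA-based motion interpolation and extrapolation (flat PCA with stand-in data, not the hierarchical structure of the papers):

```python
import numpy as np

# Minimal sketch of PCA-based gait synthesis: motions are points in a
# low-dimensional coefficient space, and new speeds or subjects are obtained
# by interpolating, or extrapolating, their coefficients.

rng = np.random.default_rng(3)
motions = rng.normal(size=(12, 600))     # 12 gait cycles, flattened poses
mean = motions.mean(axis=0)
U, S, Vt = np.linalg.svd(motions - mean, full_matrices=False)
coeffs = U * S                           # per-motion PCA coefficients

def blend(c_a, c_b, w):
    """w in [0,1] interpolates; w outside [0,1] extrapolates (e.g. a speed
    faster than any captured one)."""
    return mean + ((1 - w) * c_a + w * c_b) @ Vt

walk, run = coeffs[0], coeffs[1]
jog = blend(walk, run, 0.5)              # interpolation
sprint = blend(walk, run, 1.3)           # extrapolation beyond captured data
```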


Motion deformation with prioritized constraints


Ballet motion editing with prioritized IK (video link)


Reach, walk and karate motion editing with prioritized IK (video link)

 

We introduced an interactive motion deformation method to modify animations so that they satisfy a set of prioritized constraints. Our approach successfully handles the problems of retargeting and adjusting a motion, as well as adding significant changes to preexisting animations. The concept of prioritized constraints, introduced by P. Baerlocher within the SNF project “Interactive Optimization of Mobile Articulated Structures”, avoids the tweaking issues raised by competing constraints. Each frame is individually and smoothly adjusted to enforce the set of prioritized constraints. The iterative construction of the solution channels the convergence through intermediate solutions, enforcing the highest-priority constraints first.
In addition, we proposed a new, simple formulation to control the position of the center of mass so that the resulting motions are physically plausible.

  1. B. Le Callennec and R. Boulic, Interactive motion deformation with prioritized constraints, Graphical Models, vol. 68, num. 2, p. 175-193, 2006
  2. P. Baerlocher and R. Boulic, An inverse kinematics architecture enforcing an arbitrary number of strict priority levels, The Visual Computer, 20(6), 402-417, 2004.
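A minimal sketch of two strict priority levels in Jacobian-based IK; the cited architecture generalizes this to an arbitrary number of levels:

```python
import numpy as np

# Minimal sketch of two strict priority levels: the secondary task is solved
# in the null space of the primary one, so it can never degrade the primary
# constraint.

def two_level_ik(J1, e1, J2, e2):
    J1p = np.linalg.pinv(J1)
    dq1 = J1p @ e1                              # primary task solution
    N1 = np.eye(J1.shape[1]) - J1p @ J1         # null-space projector of task 1
    dq2 = np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ dq1)
    return dq1 + N1 @ dq2                       # secondary task, compensated

rng = np.random.default_rng(4)
J1, J2 = rng.normal(size=(3, 10)), rng.normal(size=(3, 10))
e1, e2 = rng.normal(size=3), rng.normal(size=3)
dq = two_level_ik(J1, e1, J2, e2)
print(np.allclose(J1 @ dq, e1))                 # primary task met exactly
```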

Locomotion modelling 1990-2004


Gait style editor (video link)


Various gait styles for HANIM-compliant 3D characters (video link)

The Gait style editor provides an H-ANIM compliant interface for the interactive design of real-time gait styles. The user can select both a desired linear speed and a desired angular speed and tune the current gait style through numerous postural parameters. An integrated step frequency adjustment always ensures the correct realization of the desired linear speed.

The H-Anim compliant walk engine produces a realistic walking pattern with continuously evolving velocity. Apart from an intrinsic real-time requirement, the described walking model addresses three issues: generalization (animation of a wide population of virtual humans over a wide range of walking parameters), openness (user-defined personification of the gait style) and reactivity (changing the user-defined context at any time while maintaining the coherence of the model).

R. Boulic, B. Ulicny, D. Thalmann, “Versatile Walk Engine”, Journal of Game Development, 1(1), pp 29-52, Michael van Lent Editor, Charles River Media, www.jogd.com, 2004

R. Boulic, D. Thalmann, N. Magnenat-Thalmann, A global human walking model with real time kinematic personification, The Visual Computer, 6(6), December 1990
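A minimal sketch of the kind of step-frequency adjustment mentioned above, with a hypothetical step-length model; the engine’s actual relation is derived from its walking model:

```python
# Minimal sketch of step-frequency adjustment (illustrative relation): since
# speed = step_length * step_frequency, and step length itself grows with
# frequency up to a cap, the frequency realizing a desired speed can be
# solved for numerically.

def adjust_frequency(desired_speed, step_length_fn, f_min=0.5, f_max=3.0):
    """Binary-search the step frequency (Hz) realizing desired_speed (m/s)."""
    lo, hi = f_min, f_max
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if step_length_fn(mid) * mid < desired_speed:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical step-length model: lengthens with frequency, capped at 0.9 m.
step_length = lambda f: min(0.4 + 0.25 * f, 0.9)
f = adjust_frequency(1.4, step_length)
print(f, step_length(f) * f)   # frequency whose product realizes ~1.4 m/s
```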