Exploring and optimizing the motion capture volume of a PhaseSpace system with an Oculus HMD

—— still being defined ——


  Two PhaseSpace motion capture systems are currently in use at EPFL-IGG (here in the CAVE)


The purpose of this project is to combine the immersive visualization provided by the Oculus Rift HMD (DK2) with the PhaseSpace motion capture system, in order to interactively optimize the camera setup by seeing the overlap of camera frustums. At present it is rather difficult to judge how good (or bad) our camera setup is. The idea is therefore to create a virtual environment colocated with the camera setup, in which the user would see these frustums and additional features.
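The overlap test underlying such a visualization is simple geometry. The following C++ sketch (hypothetical types and a symmetric pinhole FOV model, not PhaseSpace's actual camera parameters) counts how many camera frustums contain a given world-space point:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Minimal camera model: position, orthonormal view axes,
// symmetric field of view, and a near/far clip range.
struct Camera {
    Vec3 pos, right, up, forward;   // axes assumed orthonormal
    float tanHalfH, tanHalfV;       // tan of half horizontal/vertical FOV
    float nearClip, farClip;
};

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True if world-space point p lies inside the camera's view frustum.
bool inFrustum(const Camera& c, Vec3 p) {
    Vec3 d{p.x - c.pos.x, p.y - c.pos.y, p.z - c.pos.z};
    float z = dot(d, c.forward);                 // depth along the view axis
    if (z < c.nearClip || z > c.farClip) return false;
    float x = dot(d, c.right), y = dot(d, c.up); // lateral offsets
    return std::fabs(x) <= z * c.tanHalfH && std::fabs(y) <= z * c.tanHalfV;
}

// Coverage = number of cameras whose frustum contains the point.
int coverage(const std::vector<Camera>& cams, Vec3 p) {
    int n = 0;
    for (const auto& c : cams) n += inFrustum(c, p) ? 1 : 0;
    return n;
}
```

A color map of this count over the volume (e.g. red for 0–1 cameras, green for 3+) would make weak spots of the setup immediately visible in the HMD.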

Development framework:

The development of this project is based on the interactive rendering engine Unity 3D (version 4.x) together with the Oculus Rift SDK.

The goal is to obtain the camera calibration data from PhaseSpace, in particular each camera's position and orientation in a common coordinate system, and to propose an intuitive visualization tool for evaluating the quality of the motion capture coverage. The visualization will be viewed through an Oculus HMD that is itself tracked by the PhaseSpace system. Additional markers should be visualized too, e.g. a marker on a wand.
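Turning that calibration data into drawable frustums mainly requires rotating local frustum geometry into world space. A minimal sketch, assuming each camera pose arrives as a position plus a unit quaternion (the names and layout here are illustrative, not the actual OWL API structures):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };   // unit quaternion
struct V3   { float x, y, z; };

// Rotate vector v by unit quaternion q (computes q * v * q^-1).
V3 rotate(Quat q, V3 v) {
    // t = 2 * cross(q.xyz, v)
    V3 t{2*(q.y*v.z - q.z*v.y), 2*(q.z*v.x - q.x*v.z), 2*(q.x*v.y - q.y*v.x)};
    // v' = v + w*t + cross(q.xyz, t)
    return V3{v.x + q.w*t.x + (q.y*t.z - q.z*t.y),
              v.y + q.w*t.y + (q.z*t.x - q.x*t.z),
              v.z + q.w*t.z + (q.x*t.y - q.y*t.x)};
}

// World-space position of one far-plane frustum corner. The local
// corner is (sx*f*tanH, sy*f*tanV, f) with sx, sy in {-1, +1},
// taking +z as the camera's viewing direction.
V3 farCorner(V3 pos, Quat q, float f, float tanH, float tanV, int sx, int sy) {
    V3 local{sx * f * tanH, sy * f * tanV, f};
    V3 r = rotate(q, local);
    return V3{pos.x + r.x, pos.y + r.y, pos.z + r.z};
}
```

The four far corners plus the camera position give the five vertices of a wireframe frustum that Unity can render as line segments; the same rotation applies on the C# side after the pose crosses the DLL boundary.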

Depending on the type of project (semester / Master Diploma), the scope of the tool is different:

  • semester project: mocap space visualization tool with HMD + one marker on a wand
  • Master diploma: also consider using more markers (e.g. the full-body mocap suit) to refine the boundaries of the mocap space, exploiting the additional quality data provided by PhaseSpace; the tool could also try to suggest a better placement of some cameras.
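As a starting point for such placement suggestions, one could score a candidate camera configuration by sampling the target volume and counting how many sample points are seen by enough cameras at once. A self-contained sketch, modelling each camera's field of view as a simple cone (an illustrative simplification; it ignores the asymmetric geometry of real mocap cameras):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P3 { float x, y, z; };

struct ConeCam {
    P3 pos, dir;        // dir must be unit length
    float cosHalfFov;   // cosine of the half opening angle
    float range;        // maximum useful distance
};

// True if point p falls inside the camera's viewing cone.
bool sees(const ConeCam& c, P3 p) {
    P3 d{p.x - c.pos.x, p.y - c.pos.y, p.z - c.pos.z};
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len < 1e-6f || len > c.range) return false;
    float cosA = (d.x*c.dir.x + d.y*c.dir.y + d.z*c.dir.z) / len;
    return cosA >= c.cosHalfFov;
}

// Fraction of sample points seen by at least minCams cameras --
// a simple objective a placement optimizer could maximize.
float coverageScore(const std::vector<ConeCam>& cams,
                    const std::vector<P3>& samples, int minCams) {
    int good = 0;
    for (const P3& p : samples) {
        int n = 0;
        for (const ConeCam& c : cams) n += sees(c, p) ? 1 : 0;
        if (n >= minCams) ++good;
    }
    return samples.empty() ? 0.f : float(good) / samples.size();
}
```

Requiring minCams >= 2 reflects that a marker must be observed by several cameras for reliable 3D reconstruction; the Master-level tool could perturb camera poses and keep changes that raise this score.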



Programming (C++ for PhaseSpace / C# + DLL for Unity3D)


Nan WANG (nan.wang@epfl.ch, INJ138)

Ronan BOULIC (Ronan.boulic@epfl.ch, INJ141)