• Title: Human-Machine Interface through facial feature tracking; interaction with a 3D avatar and subjective scene visualization; immersion in a 3D scene.

The development of a human-machine interface based on face detection and tracking opens up a wide range of applications in which users interact with the system through their head movements.

In this project, two main lines should be distinguished: the first deals with face detection and tracking, and the second with 3D model/scene rendering. The face detection and tracking module (FDT) is transparent to the user, who does not need to wear any additional equipment: interaction happens automatically as soon as the head moves.

The FDT module, as its name suggests, comprises both face/facial feature detection and tracking. The two algorithms are combined through a strategy that allows the system to recover from tracking errors almost immediately. The code is written in C++ and makes use of the OpenCV library.
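The detection/tracking combination can be pictured as a small state machine: the cheap tracker runs on every frame, and as soon as it loses the target the full detector is invoked to reacquire the face. The sketch below illustrates that recovery strategy only; the `Face` struct and the `detect`/`track` callables are stand-ins, not the project's actual OpenCV code.

```cpp
#include <optional>

// Stand-in for a tracked face estimate (the real module would hold an
// OpenCV rectangle or feature positions).
struct Face { float x, y; };

// One step of the detect/track loop: prefer tracking from the previous
// estimate; on failure, fall back to full detection so the system
// recovers almost immediately.
template <typename Detector, typename Tracker>
std::optional<Face> processFrame(Detector detect, Tracker track,
                                 std::optional<Face>& state) {
    if (state) {
        // Try to track from the previous estimate first.
        if (auto f = track(*state)) { state = f; return state; }
        state.reset();              // tracking lost
    }
    // No valid estimate: run the full detector to (re)acquire the face.
    if (auto f = detect()) { state = f; }
    return state;
}
```

In practice the detector would be something like an OpenCV cascade classifier and the tracker a cheaper frame-to-frame method; the point of the structure is that detection cost is only paid when tracking fails.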

Whenever a face appears in the webcam's field of view, the system detects and tracks the face and both eyes. Based on the eyes' positions, the system determines the location and movement of the user and acts accordingly.
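One simple way to turn two eye positions into a user location, sketched below under assumed conventions (the project's actual computation is not specified): the midpoint between the eyes gives the lateral position, and the inter-ocular pixel distance, which shrinks as the user moves away, gives a rough depth cue when compared to a calibrated reference.

```cpp
#include <cmath>

// Pixel coordinates of the left and right eye centres.
struct EyePair { float lx, ly, rx, ry; };

// Estimated user position: image-plane centre plus a relative depth
// (1.0 at the calibration distance, >1.0 when farther away).
struct UserPose { float cx, cy, relDepth; };

UserPose estimatePose(const EyePair& e, float refEyeDistPx) {
    float cx = 0.5f * (e.lx + e.rx);           // midpoint between eyes
    float cy = 0.5f * (e.ly + e.ry);
    float d  = std::hypot(e.rx - e.lx, e.ry - e.ly);
    return { cx, cy, refEyeDistPx / d };       // ratio to calibrated distance
}
```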

Scene Rendering and Visualization

Real-time 3D model/scene rendering is accomplished through the OpenGL library. So far, we have devised two different applications:

In the first, the user sees a subjective view of a given 3D scene, and this point of view changes as the user's face moves. Moreover, if the user approaches the screen, the scene appears bigger, and vice versa. An example of a 3D scene, in which the user can move among the donuts, is shown next.

The second application focuses on interaction with a 3D computer-generated face model, in which both full-head and eye movements are considered. The system tracks the position of the user and changes the orientation of the synthetic head or its gaze direction, so that the 3D model always appears to be looking at the user.
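The "always looking at you" behaviour reduces to computing the yaw and pitch that point the avatar at the user's estimated position. The sketch below illustrates that geometry only; the coordinate convention (x lateral, y vertical, z out of the screen toward the user) is an assumption, not the project's actual code.

```cpp
#include <cmath>

// Head/gaze orientation toward the user, in degrees.
struct Angles { float yawDeg, pitchDeg; };

Angles lookAtUser(float ux, float uy, float uz) {
    // ux/uy are the user's lateral/vertical offsets from the screen
    // centre; uz is the distance from the screen to the user.
    float yaw   = std::atan2(ux, uz);
    float pitch = std::atan2(uy, std::hypot(ux, uz));
    const float rad2deg = 180.0f / 3.14159265358979f;
    return { yaw * rad2deg, pitch * rad2deg };
}
```

The same angles can drive either the whole synthetic head or, with a smaller range, just the eyes of the model.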