
3-D face models that give animators intuitive control of expressions

Date:
September 18, 2011
Source:
Carnegie Mellon University
Summary:
Flashing a wink and a smirk might be second nature for some people, but computer animators can be hard-pressed to depict such an expression realistically. Now scientists have created computerized models derived from actors' faces that reflect a full range of natural expressions while also giving animators the ability to manipulate facial poses.
FULL STORY

Flashing a wink and a smirk might be second nature for some people, but computer animators can be hard-pressed to depict such an expression realistically. Now scientists at Disney Research, Pittsburgh, and Carnegie Mellon University's Robotics Institute have created computerized models derived from actors' faces that reflect a full range of natural expressions while also giving animators the ability to manipulate facial poses.

The researchers developed a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need. The work, presented Aug. 10 at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques, in Vancouver, envisions a facial model that could be used to rapidly animate any number of characters for films, video games or exhibits.

"We can build a model that is driven by data, but can still be controlled in a local manner," said J. Rafael Tena, a Disney research scientist, who developed the interactive face models based on principal component analysis (PCA) with Iain Matthews, senior research scientist at Disney, and Fernando De la Torre, associate research professor of robotics at Carnegie Mellon.

Previous data-driven approaches have resulted in models that capture motion across the face as a whole. Tena said these are of limited use for animators because attempts to alter one part of an expression -- a cocked eye, for instance -- can cause unwanted motions across the entire face. Attempts to simply divide these holistic models into pieces are less effective because the resulting model isn't tailored to the motion of each piece.

As a result, Tena said, most facial animation still depends on "blendshape" models -- sets of facial poses sculpted by artists based on static images. Given the wide range of human expressions, however, it can be difficult to predict all of the facial poses a film or video game will require, and many additional poses must often be created during the course of production.
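
For readers unfamiliar with the term, a blendshape model is essentially a weighted sum of artist-sculpted target poses around a neutral face. The toy sketch below is my own illustration of that arithmetic, not code from the article; the function name and mesh layout are assumptions.

import numpy as np

def blend(neutral, targets, weights):
    """neutral and each targets[i]: (num_vertices, 3) meshes; weights: one slider value per target."""
    # Each target contributes its offset from the neutral pose, scaled by its weight.
    offsets = np.array([target - neutral for target in targets])
    return neutral + np.tensordot(weights, offsets, axes=1)

Every pose a production needs must be expressible as such a combination, which is why missing targets have to be sculpted on demand during production.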

By contrast, Tena, De la Torre and Matthews created their models by recording facial motion capture data from a professional actor as he performed sentences with emotional content, localized actions and random motions. To cover the whole face, 320 markers were applied so that the camera could capture facial motions during the performances.

The data from the actor was then analyzed using a mathematical method that divided the face into regions, based in part on distances between points and in part on correlations between points that tend to move in concert with each other. These regional sub-models are independently trained, but share boundaries. In this study, the result was a model with 13 distinct regions, but Tena said more regions would be possible by using performance capture techniques that can provide a dense reconstruction of the face, rather than the sparse samples produced by traditional motion capture equipment.
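
The article does not spell out the algorithm, but the grouping it describes -- markers that are close together and that tend to move together end up in the same region -- resembles standard clustering. The rough sketch below is an assumed simplification using SciPy's hierarchical clustering, not the published method; the affinity definition, the alpha mixing weight and the function names are all illustrative.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def segment_face(frames, rest_positions, num_regions=13, alpha=0.5):
    """frames: (num_frames, num_markers, 3) mocap data; rest_positions: (num_markers, 3)."""
    num_markers = rest_positions.shape[0]
    flat = frames.reshape(frames.shape[0], -1)
    # How strongly each pair of markers moves in concert, averaged over x, y, z.
    corr = np.abs(np.corrcoef(flat.T)).reshape(num_markers, 3, num_markers, 3).mean(axis=(1, 3))
    # Normalized spatial distance between markers on the rest-pose face.
    dist = np.linalg.norm(rest_positions[:, None] - rest_positions[None, :], axis=-1)
    dist /= dist.max()
    # Markers that are near each other and highly correlated get low dissimilarity.
    dissimilarity = alpha * dist + (1.0 - alpha) * (1.0 - corr)
    condensed = dissimilarity[np.triu_indices(num_markers, k=1)]
    labels = fcluster(linkage(condensed, method="average"), num_regions, criterion="maxclust")
    return labels  # one region label per marker; each region can then get its own sub-model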

Future work will include developing models based on higher-resolution motion data and developing an interface that can be readily used by computer animators.

For more information and to see a video, visit this project's web page: http://drp.disneyresearch.com/projects/irblfm/irblfm.html


Story Source:

Materials provided by Carnegie Mellon University. Note: Content may be edited for style and length.


