Modeling and Animating the Gesture Style of Particular Individuals

Michael Neff - University of California, Davis

Consider the following goal:  a system that takes as input a string of
text and provides as output an animated character that says the text while
gesticulating appropriately in the style of a specified target
subject.  Satisfying this goal would allow animated characters to be used
in a wide range of applications.  In this talk, I will discuss some of our
work towards this goal, presenting a system capable of producing full-body
gesture animation for given input text in the style of a particular
performer. Our process starts with video of a person whose gesturing style
we wish to imitate. A tool-assisted annotation process is performed on the
video, from which a statistical model of the person's particular gesturing
style is built. Using this model and input text tagged with theme, rheme
and focus, our generation algorithm creates a gesture script. As opposed
to isolated singleton gestures, our gesture script specifies a stream of
continuous gestures coordinated with speech. This script is passed to an
animation system, which enhances the gesture description with additional
detail. It then generates either kinematic or physically simulated motion
based on this description. The system is capable of generating gesture
animations for novel text that are consistent with a given performer's
style, as was successfully validated in an empirical user study.
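To make the pipeline concrete, the core idea of the statistical model and script generation can be sketched as follows. This is a hypothetical illustration, not the actual system: the class names, tag set, and gesture labels are assumptions. It shows a per-performer profile that records which gesture types were annotated for each information-structure tag (theme, rheme, focus), then samples gestures for new tagged text in proportion to their observed frequencies.

```python
import random
from collections import defaultdict

# Hypothetical sketch of a per-performer gesture profile: maps an
# information-structure tag (theme / rheme / focus) to the gesture
# types observed for that tag in the annotated video.
class GestureProfile:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, tag, gesture):
        # Record one annotated (tag, gesture) pair from the video corpus.
        self.counts[tag][gesture] += 1

    def sample(self, tag, rng):
        # Draw a gesture type in proportion to its observed frequency.
        options = self.counts[tag]
        gestures = list(options)
        weights = [options[g] for g in gestures]
        return rng.choices(gestures, weights=weights)[0]

def generate_script(tagged_clauses, profile, seed=0):
    # Produce a gesture script: one sampled gesture per tagged clause,
    # forming a continuous stream coordinated with the speech text.
    rng = random.Random(seed)
    return [(clause, profile.sample(tag, rng))
            for clause, tag in tagged_clauses]

# Toy "annotation" data standing in for the tool-assisted video labels.
profile = GestureProfile()
for tag, gesture in [("rheme", "beat"), ("rheme", "iconic"),
                     ("theme", "beat"), ("focus", "deictic")]:
    profile.observe(tag, gesture)

script = generate_script([("our system", "theme"),
                          ("generates gestures", "rheme"),
                          ("for novel text", "focus")], profile)
```

In the actual system the script is then passed to the animation stage, which fills in timing and form detail before producing kinematic or physically simulated motion.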