Modeling Gaze Behavior for Virtual Demonstrators


Abstract:

Achieving autonomous virtual humans with coherent and natural motions is key to their effectiveness in many educational, training, and therapeutic applications. Among the several aspects to consider, gaze behavior is an important non-verbal communication channel that plays a vital role in the effectiveness of the resulting animations. This paper analyzes gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions.

Paper:

Modeling Gaze Behavior for Virtual Demonstrators
Yazhou Huang, Marcelo Kallmann, Justin L Matthews and Teenie Matlock
International Conference on Intelligent Virtual Agents
Reykjavík, Iceland, 2011

(the original publication is available at www.springerlink.com)


Bibtex:
  @inproceedings{huang11iva,
    author    = {Yazhou Huang and Marcelo Kallmann and Justin L Matthews and Teenie Matlock},
    title     = {Modeling Gaze Behavior for Virtual Demonstrators},
    booktitle = {Proceedings of the 11th International Conference on Intelligent Virtual Agents (IVA)},
    year      = {2011},
    location  = {Reykjav{\'\i}k, Iceland},
  }
 


(for information on other projects, see our research and publications pages)