Modeling Gaze Behavior for Virtual Demonstrators

Yazhou Huang, Justin L Matthews, Teenie Matlock, Marcelo Kallmann

Research output: Contribution to journal › Article › peer-review

Abstract

Achieving autonomous virtual humans with coherent and natural motions is key to effectiveness in many educational, training, and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in the quality of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions.
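To make the setting concrete, the sketch below illustrates one simple way a gaze controller for a virtual demonstrator might orient a character's head toward an arbitrarily placed target. It is a minimal illustration assuming a basic yaw/pitch head model with a fixed per-frame angular speed; the positions, speed, and helper names are hypothetical, and this is not the model proposed in the paper.

```python
import math

def gaze_angles(head_pos, target_pos):
    """Yaw/pitch (radians) that orient the head from head_pos toward target_pos."""
    dx = target_pos[0] - head_pos[0]
    dy = target_pos[1] - head_pos[1]
    dz = target_pos[2] - head_pos[2]
    yaw = math.atan2(dx, dz)                    # rotation about the vertical axis
    pitch = math.atan2(dy, math.hypot(dx, dz))  # elevation toward the target
    return yaw, pitch

def step_angle(current, target, max_step):
    """Move one angle toward its target by at most max_step per frame."""
    delta = target - current
    return current + max(-max_step, min(max_step, delta))

# Example: turn the head toward a demonstrated object over successive frames.
head = (0.0, 1.6, 0.0)   # head position in meters (assumed coordinate frame)
obj = (0.8, 1.0, 1.5)    # target object position
yaw_goal, pitch_goal = gaze_angles(head, obj)
yaw = pitch = 0.0
for frame in range(60):  # assume 60 fps, ~120 deg/s maximum angular speed
    yaw = step_angle(yaw, yaw_goal, math.radians(2.0))
    pitch = step_angle(pitch, pitch_goal, math.radians(2.0))
print(f"final yaw={math.degrees(yaw):.1f} deg, pitch={math.degrees(pitch):.1f} deg")
```

In a full model of the kind the abstract describes, the timing of such head turns and their coordination with targets and observers at varied positions would be fitted to the captured human motion data rather than fixed by hand.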

Keywords

  • gaze model
  • motion synthesis
  • virtual humans
  • virtual reality

Disciplines

  • Social Psychology
