Planning Humanlike Actions in Blending Spaces


Abstract:

We introduce an approach that enables sampling-based planners to compute motions with humanlike appearance. The proposed method is based on a space of blendable example motions collected by motion capture. This space is explored by a sampling-based planner that produces motions around obstacles while keeping solutions close to the original examples, so the results largely retain the humanlike characteristics observed in the example motions. The method is applied to generic upper-body actions and is complemented by a locomotion planner that searches for suitable body placements from which the upper-body actions can be executed successfully. As a result, the overall multi-modal planning method automatically coordinates whole-body motions for executing actions among obstacles, and the produced motions remain similar to the considered example motions.
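The abstract summarizes the core idea of planning directly in a space of blend weights over captured example motions. The sketch below is only a toy illustration of that idea under assumed interfaces, not the paper's algorithm: it grows an RRT-style tree over blend-weight vectors, synthesizes each candidate motion by weighted interpolation of hypothetical example clips, and keeps only candidates whose blended motion passes a user-supplied collision check. The names examples, check_pose, and the constants are assumptions made for illustration.

# Minimal sketch (not the authors' implementation): RRT-style search over a
# blend-weight space built from a few motion-capture example clips.
import random

NUM_EXAMPLES = 4   # assumed number of captured example motions being blended
STEP = 0.05        # bound on each extension step in blend-weight space
MAX_ITERS = 5000

def random_weights(n=NUM_EXAMPLES):
    """Sample a random convex blend-weight vector (weights >= 0, sum to 1)."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return tuple(x / s for x in w)

def blend_pose(weights, examples, t):
    """Weighted interpolation of the example poses at normalized time t."""
    # 'examples' is assumed to be a list of callables: examples[i](t) -> joint values.
    poses = [ex(t) for ex in examples]
    dof = len(poses[0])
    return [sum(w * p[j] for w, p in zip(weights, poses)) for j in range(dof)]

def collision_free(weights, examples, check_pose, samples=20):
    """Validate the blended motion against obstacles at a few sampled times."""
    return all(check_pose(blend_pose(weights, examples, k / (samples - 1)))
               for k in range(samples))

def steer(w_from, w_to, step=STEP):
    """Move from one weight vector toward another by a bounded step, then renormalize."""
    d = [b - a for a, b in zip(w_from, w_to)]
    norm = max(sum(x * x for x in d) ** 0.5, 1e-9)
    w = [max(a + step * x / norm, 0.0) for a, x in zip(w_from, d)]
    s = sum(w)
    return tuple(x / s for x in w)

def plan_in_blend_space(w_start, w_goal, examples, check_pose):
    """Grow a tree of valid blend weights from w_start toward w_goal (RRT-like)."""
    tree = {w_start: None}  # maps each node to its parent
    for _ in range(MAX_ITERS):
        target = w_goal if random.random() < 0.1 else random_weights()
        nearest = min(tree, key=lambda w: sum((a - b) ** 2 for a, b in zip(w, target)))
        new = steer(nearest, target)
        if collision_free(new, examples, check_pose):
            tree[new] = nearest
            if sum((a - b) ** 2 for a, b in zip(new, w_goal)) < STEP ** 2:
                path = [new]
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return list(reversed(path))  # sequence of blend-weight vectors
    return None

In this toy version a solution is a sequence of blend-weight vectors whose interpolated motions are all collision-free; since every synthesized pose is a convex combination of example poses, the output stays within the span of the captured examples, which is the intuition behind keeping planned motions humanlike.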

Paper:

Planning Humanlike Actions in Blending Spaces
Yazhou Huang, Mentar Mahmudi and Marcelo Kallmann
IEEE/RSJ International Conference on Intelligent Robots and Systems
San Francisco, CA, 2011


Video:

(5 MB .mp4)

Bibtex:
  @inproceedings{huang11iros,
    author    = {Yazhou Huang and Mentar Mahmudi and Marcelo Kallmann},
    title     = {Planning Humanlike Actions in Blending Spaces},
    booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    year      = {2011},
    address   = {San Francisco, California},
  }

