Abstract

Embodied Cognitive Architectures

To make progress in understanding the operations of the human brain, we will
need to understand its basic functions at an abstract level. One way to achieve
such an understanding is to create a model of a human with sufficient
complexity to be capable of interpreting abstract behavioral
models. Recent technological advances allow progress to be made in this
direction. Virtual reality (VR) graphics models that simulate
extensive human capabilities can be used as platforms from which to develop
synthetic models of visuo-motor behavior. Currently such models can capture
only a small portion of a full behavioral repertoire, but for the behaviors
that they do model, they can describe complete visuo-motor subsystems at a
useful level of detail. The value in doing so is that the body’s elaborate
visuo-motor structures greatly constrain and simplify the specification of the
abstract behaviors that guide them. The result is that, essentially, one is
left with proposing an embodied “operating system” model for picking the right
set of abstract behaviors at each instant. This paper outlines one such model.
A centerpiece of the model is the use of vision to aid the behavior that has
the most to gain from taking environmental measurements. Preliminary tests of
the model against human performance in realistic VR environments show that the
model's main features appear in human behavior.
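The arbitration idea above, allocating each visual measurement to the behavior
with the most to gain, can be illustrated with a small sketch. The following
Python fragment is a hypothetical reading rather than the paper's model: the
behavior names, the gain measure, and the uncertainty dynamics are all
assumptions made for illustration.

```python
# Sketch of a gaze-arbitration "operating system" (hypothetical names, not the
# paper's implementation). Each behavior tracks its own uncertainty about the
# world state it cares about; at every instant, the next visual measurement is
# granted to the behavior that would benefit most from reducing that uncertainty.

from dataclasses import dataclass

@dataclass
class Behavior:
    name: str
    uncertainty: float      # grows while the behavior goes unmeasured
    cost_of_error: float    # how costly acting on stale information is
    growth_rate: float      # how fast uncertainty accumulates per time step

    def expected_gain(self) -> float:
        # Assumed gain model: expected loss avoided if this behavior
        # receives a fresh visual measurement right now.
        return self.uncertainty * self.cost_of_error

def schedule_gaze(behaviors: list[Behavior]) -> Behavior:
    """Pick the behavior with the most to gain from a measurement."""
    chosen = max(behaviors, key=lambda b: b.expected_gain())
    chosen.uncertainty = 0.0                 # measurement resets its uncertainty
    for b in behaviors:
        if b is not chosen:
            b.uncertainty += b.growth_rate   # others keep accumulating
    return chosen

if __name__ == "__main__":
    behaviors = [
        Behavior("follow_path", uncertainty=0.2, cost_of_error=1.0, growth_rate=0.1),
        Behavior("avoid_obstacles", uncertainty=0.1, cost_of_error=3.0, growth_rate=0.3),
        Behavior("pick_up_object", uncertainty=0.5, cost_of_error=0.5, growth_rate=0.05),
    ]
    for t in range(5):
        print(t, schedule_gaze(behaviors).name)
```

The point of the sketch is only that arbitration reduces to comparing a scalar
expected gain across behaviors at each instant; how that gain is estimated is
the substantive modeling question.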

--------------------------------

Bio

Dana Ballard is a professor of computer science at the University of Texas at
Austin, having recently moved from the University of Rochester. His main
research interest is in computational theories of the brain, with an emphasis
on human vision. In 1985, he and Chris Brown led a team that designed and built
a high-speed binocular camera control system capable of simulating human eye
movements, which spurred an increased understanding of the role of behavior in
vision. Currently he is pursuing this research by using model humans in
virtual reality environments. His group was the first to integrate eye
tracking into a head-mounted display. He is the author of two books, Computer
Vision (with Chris Brown) and Natural Computation.