Embodied Models of Human Vision
To make progress in understanding human visuo-motor behavior, we need to understand its basic components at an abstract level. One way to achieve such an understanding is to build a model of a human complex enough to generate such behaviors. Recent technological advances make progress in this direction possible: graphics models that simulate extensive human capabilities can serve as platforms for developing synthetic models of visuo-motor behavior. Such models currently capture only a small portion of the full behavioral repertoire, but for the behaviors they do model, they describe complete visuo-motor subsystems at a useful level of detail. The value of doing so is that the body's elaborate visuo-motor structures greatly simplify and encapsulate the specification of the abstract behaviors that guide them. The net result is that one is essentially left with proposing an embodied ‘operating system’ model that picks the right set of abstract behaviors at each instant. We describe one such model. Its centerpiece uses vision to aid the behavior that has the most to gain from taking environmental measurements. Preliminary tests of the model against human performance in realistic virtual-reality environments show that the model's main features appear in human behavior.
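The gaze-arbitration idea lends itself to a compact sketch. Below is a minimal Python illustration of one way such an ‘operating system’ could allocate vision: each concurrent behavior tracks growing uncertainty about its task-relevant state, and at every timestep gaze goes to the behavior whose expected loss from that uncertainty is largest. The Behavior fields, the task names, and the variance-reset measurement model are illustrative assumptions, not details given in the abstract.

    from dataclasses import dataclass

    @dataclass
    class Behavior:
        """One concurrent task tracking uncertainty about its own state.
        All fields are hypothetical parameters for illustration."""
        name: str
        variance: float           # current uncertainty about task-relevant state
        growth: float             # how fast uncertainty grows without a measurement
        cost_per_variance: float  # expected loss incurred per unit of uncertainty

        def expected_gain(self) -> float:
            # Value of looking = expected loss avoided by collapsing uncertainty.
            return self.cost_per_variance * self.variance

    def schedule_gaze(behaviors, steps=10, measured_variance=0.05):
        """Greedy scheduler: each timestep, grant gaze to the behavior
        with the most to gain from an environmental measurement."""
        for t in range(steps):
            winner = max(behaviors, key=lambda b: b.expected_gain())
            for b in behaviors:
                if b is winner:
                    b.variance = measured_variance  # measurement resets uncertainty
                else:
                    b.variance += b.growth          # unattended state drifts
            print(f"t={t}: gaze -> {winner.name}")

    if __name__ == "__main__":
        # Hypothetical walking-task behaviors, chosen only to exercise the scheduler.
        tasks = [
            Behavior("avoid_obstacles", variance=1.0, growth=0.4, cost_per_variance=2.0),
            Behavior("follow_path",     variance=1.0, growth=0.2, cost_per_variance=1.0),
            Behavior("pick_up_objects", variance=1.0, growth=0.1, cost_per_variance=0.5),
        ]
        schedule_gaze(tasks)

Running the sketch shows the qualitative signature one would test for against human data: gaze visits every behavior, but behaviors whose uncertainty is costlier or grows faster are sampled more often.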