High-level Perception using Low-level Mechanisms?

From Santa Fe Institute Events Wiki

Revision as of 15:37, 18 September 2007 by Mm (talk | contribs)

Vision used to be divided into clearly distinct levels (e.g. low-level, mid-level and high-level), and it was generally assumed that processing proceeded sequentially, with high-level perceptual mechanisms becoming involved only once the lower-level processing was complete. However, there is increasing evidence that certain "high-level" tasks, including detecting the presence of animals and humans in complex scenes, can be performed faster than many a priori simpler tasks. In particular, when two scenes are flashed left and right of fixation, subjects can initiate saccades to the side containing an animal in as little as 120 ms. I will argue that feed-forward processing mechanisms can detect the presence of certain key forms very rapidly, without the need for complex mechanisms such as scene segmentation. Furthermore, simulation studies have demonstrated that this sort of rapid processing can be achieved using information encoded in the order of firing of neurons in a population, because the first neurons to fire are generally those that are most strongly excited. Interestingly, it appears that Spike-Timing-Dependent Plasticity (STDP), when coupled with temporally encoded information, naturally leads to the development of neurons selective to frequently occurring and important stimuli.
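The two ideas in the abstract can be illustrated with a toy simulation. This is only a minimal sketch, not the speaker's actual model: the rank-modulation factor `MOD`, the learning rate, and the simplified potentiation/depression rule are all assumptions introduced here for illustration. It encodes a stimulus as a firing *order* (strongest input fires first), lets a downstream neuron weight each spike by how early it arrives, and applies a crude STDP-like rule in which early-firing inputs are strengthened and late ones weakened. After repeated exposure to one "frequent" pattern, the neuron responds more strongly to that pattern than to random firing orders.

```python
import numpy as np

MOD = 0.8  # rank modulation factor (an illustrative assumption)

def rank_order_code(activations):
    """Encode an input as a firing order: the most strongly
    excited neuron fires first (rank 0)."""
    return np.argsort(-np.asarray(activations))

def response(order, weights, mod=MOD):
    """Downstream response: each input contributes weight * mod**rank,
    so early (strongly driven) spikes dominate the sum."""
    return sum(weights[n] * mod**rank for rank, n in enumerate(order))

def stdp_step(weights, order, lr=0.05, mod=MOD):
    """Simplified STDP: inputs that fire early are potentiated in
    proportion to how early they fire; late inputs are slightly
    depressed. Weights are kept in [0, 1]."""
    for rank, n in enumerate(order):
        weights[n] += lr * (1.5 * mod**rank - 0.5)
    np.clip(weights, 0.0, 1.0, out=weights)
    return weights

rng = np.random.default_rng(0)
n_inputs = 16
pattern = rng.random(n_inputs)      # a frequently occurring stimulus
w = np.full(n_inputs, 0.5)          # initially unselective weights

for _ in range(200):                # repeated exposure to the pattern
    stdp_step(w, rank_order_code(pattern))

trained = response(rank_order_code(pattern), w)
control = np.mean([response(rng.permutation(n_inputs), w)
                   for _ in range(100)])
print(trained > control)            # the neuron has become selective
```

Because the update is positive only for the earliest ranks, repeated presentations concentrate the weight on the inputs that fire first for the frequent pattern, which is the sense in which STDP plus temporal coding yields selectivity here.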
