July 21, 2003

Bigger and brighter isn’t better, at least not when trying to view moving objects.
That is the counter-intuitive result of a study by a team of Vanderbilt psychologists, one that sheds new light on one of the most sophisticated processes the brain performs: identifying and tracking moving objects.
“The bigger an object, the easier it is to see. But it is actually harder for people to determine the motion of objects larger than a tennis ball held at arm’s length than it is to gauge the motion of smaller objects,” says Duje Tadin, first author of the paper on the study appearing in the July 17 issue of the journal Nature. Tadin is a graduate student in psychology at Vanderbilt; his co-authors are postdoctoral fellow Lee A. Gilroy and professors Joseph S. Lappin and Randolph Blake.
In the article, the researchers show that this unexpected result is due to the way visual signals are processed in the part of the brain known as the middle temporal visual area, or MT, one of the 30-plus cortical centers involved in processing visual signals. Their findings support the hypothesis that neurons in MT employ a mechanism called “center-surround receptive field organization.” This same mechanism, which acts to highlight differences, is found in a number of other senses, including touch, hearing and smell.
In the visual system, center-surround organization is a clever way that nature has developed for filtering out spurious signals: shifting patterns of light that fall on the retina but have nothing to do with the movement of objects in the external world.
One of the most difficult things the brain does is pick out objects from the visual background. Objects can differ from the background in a number of ways, including texture, color, brightness, binocular displacement (the difference in image placement in each eye due to the distance between them) and motion. So the brain uses these and a number of other visual cues to pick out individual objects.
Information from the eyes goes first to the primary visual cortex at the very back of the brain. Here the information is separated into different characteristics, such as texture, color, brightness and motion.
But how does the brain “see” motion? Just detecting shifting light patterns is not enough. Each time you shift your eyes or move your body, for example, the patterns on the retina change in ways that must be ignored. That is where the researchers think center-surround receptive field organization comes in. Neurons in the primary visual cortex relay motion information to the neurons in MT, an area that Vanderbilt neuroscientist Jon Kaas helped discover. Experiments indicate that, near the center of the visual field, each MT neuron “monitors” an area about the size of a tennis ball held at arm’s length. However, each neuron is not affected only by what happens in this central area. It is also influenced by the responses of the neurons that monitor a surrounding area about the size of a soccer ball held at arm’s length.
The center-surround mechanism works as follows. Each neuron has a preferred direction of motion: right, left, up, down and so on. If a neuron that prefers rightward motion detects motion to the right while the neurons in its surround area are not registering any motion, it fires vigorously. If the neurons in its surround area are stimulated by leftward motion, it sometimes fires even more vigorously. But if the surrounding neurons are also registering motion to the right, the neuron’s firing is suppressed. This inhibitory effect is the hallmark of the center-surround mechanism.
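The three firing rules above can be sketched in a few lines of code. This is a toy illustration only: the function, the direction labels and the numeric firing rates are illustrative assumptions, not values from the study.

```python
# Toy sketch of center-surround suppression in a motion-sensitive MT-like
# neuron. All numbers are illustrative assumptions, not measurements.

def mt_response(center_dir, surround_dir, preferred="right"):
    """Return a schematic firing rate (0 to ~1) for one model neuron.

    center_dir   -- direction of motion inside the neuron's central area
    surround_dir -- direction of motion in the surround (None = no motion)
    preferred    -- the direction this neuron is tuned to
    """
    if center_dir != preferred:
        return 0.0   # motion in a non-preferred direction: no response
    if surround_dir is None:
        return 1.0   # isolated moving object: vigorous firing
    if surround_dir == preferred:
        return 0.1   # same-direction surround: firing is suppressed
    return 1.2       # opposite-direction surround: firing can be enhanced

print(mt_response("right", None))     # small object moving alone -> 1.0
print(mt_response("right", "right"))  # treated as background motion -> 0.1
print(mt_response("right", "left"))   # object against opposing background -> 1.2
```

The suppression case is what makes a large pattern, which fills both center and surround with the same motion, harder to see than a small one.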
“This is what causes moving objects to stand out distinctly even against moving backgrounds,” Lappin comments. “But when objects are the size of the surround area or larger, they tend to be treated as background motion and so are less visible.”
The researchers discovered this effect when they analyzed the results of a series of psychophysical experiments in which human observers were asked to determine the direction of motion of patterns of varying speed, size and contrast that were flashed briefly on a screen. Not only did these experiments confirm that people have more trouble determining the motion of larger objects, they also showed that this effect was greatest in conditions of high contrast. The influence of surrounding neurons weakens as contrast levels decline.
“This shows that the visual system adapts to the amount of information available. When visual information is plentiful, it uses a differentiation strategy to identify moving objects. As light levels drop, however, it switches to an integration strategy that uses the available information more efficiently,” says Lappin.
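This switch between strategies can be sketched as a simple weighting rule: the surround’s influence slides from suppression at high contrast toward summation at low contrast. The linear weighting and all numeric values below are illustrative assumptions, not the researchers’ model.

```python
# Schematic of contrast-dependent surround influence: differentiation
# (suppression) when visual information is plentiful, integration (pooling)
# when it is scarce. The linear weighting is an illustrative assumption.

def pooled_response(center_signal, surround_signal, contrast):
    """contrast in [0, 1]; positive signals mean motion in the preferred direction."""
    # Surround weight slides from +0.5 (summation) at zero contrast
    # to -1.0 (full suppression) at maximum contrast.
    surround_weight = 0.5 - 1.5 * contrast
    return max(0.0, center_signal + surround_weight * surround_signal)

# Large pattern at high contrast: surround motion cancels the response.
print(pooled_response(1.0, 1.0, contrast=1.0))   # 0.0
# Same large pattern at low contrast: surround motion now adds to it.
print(pooled_response(1.0, 1.0, contrast=0.1))   # 1.35
```

Under this toy rule, large high-contrast patterns are the hardest to see, matching the experimental finding that the size effect was greatest at high contrast.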
Once the researchers had documented this odd side effect of the motion-enhancing mechanism – that it is harder to determine the motion of larger objects – they designed a series of follow-up experiments that pinpointed the effect to MT. They used what is known about how MT works to predict how observers should respond to another set of stimuli, ran the experiments and compared the results with the predictions. For example, they knew that MT neurons are not very responsive to color. So they revised their experiments so that the moving patterns were defined by color rather than by brightness. As predicted, the center-surround effects did not appear.
The research was supported by the National Institutes of Health.
The above story is reprinted from materials provided by Vanderbilt University.