Judging how hard to hit the brakes is probably the most important decision a driver makes. Cognitive scientists have known for several decades that the brain uses changes in the visual image to gauge self-motion, but a scientist at the University of Rochester has uncovered a new trick that the mind uses to judge just how quickly we are hurtling toward something. The findings appear in the April 12 issue of Nature.
David C. Knill, associate professor of brain and cognitive sciences at the University of Rochester, found that the brain measures continuous changes in an object’s size to determine the rate at which the object is closing in. Scientists have long thought that the brain relies entirely on the relative motions of objects in the visual field, such as the speed at which the taillights of the car ahead of us appear to be moving apart. Knill has found that our brains utilize a second method that in many ways may be more important than the first.
“If our brains only used the motions of objects and not their changes in size, then situations like a busy street would confound our ability to navigate,” says Knill. “Imagine driving toward a crowd of people. The random movement of people in the crowd would add considerable ‘noise’ to the pattern of relative motions in your visual field. For example, just because two people appear to move apart does not mean that you are moving toward them. By monitoring how quickly objects grow or shrink, however, you can derive much more reliable information about how fast you are approaching.”
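The reliability Knill describes can be made concrete with a standard formalization of the looming cue (this formula is textbook optics, not taken from the study itself): the time remaining before contact is approximately the object's angular size divided by its rate of angular expansion. A minimal sketch, with illustrative parameter names:

```python
import math

def time_to_contact(width_m, distance_m, speed_mps):
    """Estimate time-to-contact from the looming (size-change) cue alone.

    tau = theta / (d theta / dt), where theta is the object's angular size.
    For a frontal object of width w at distance d, approached at speed v:
      theta     = 2 * atan(w / (2 * d))
      dtheta/dt = w * v / (d**2 + (w / 2)**2)
    For small angles this reduces to the familiar tau ~= d / v.
    """
    theta = 2 * math.atan(width_m / (2 * distance_m))
    dtheta_dt = (width_m * speed_mps) / (distance_m**2 + (width_m / 2) ** 2)
    return theta / dtheta_dt

# A 2 m wide car seen 50 m ahead while closing at 10 m/s:
tau = time_to_contact(2.0, 50.0, 10.0)  # close to 50/10 = 5 seconds
```

Note that the estimate needs no information about where features move in the visual field, which is exactly why the size-change cue survives the "noisy crowd" situation described above.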
Complicating Knill’s work is the fact that size change and optic flow (the motion of features across our visual field) are tightly intertwined. When an image gets larger, its edges spread out across the retina, and that movement is itself one of the cues the brain uses to measure how fast an object is approaching. Knill therefore had to devise a way to increase the apparent size of an object without letting any part of it move in the visual field. His team came up with the ingenious idea of creating a video that looks like “growing” static: amorphous smudges of black and white that grow larger with each frame.
No single smudge or feature persisted from one frame of the video to the next, so there was no perception of motion. Instead, all-new smudges of a slightly larger size appeared in each successive frame of the two-second video. By controlling the rate at which the smudges grow, Knill was able to make viewers feel as if the amorphous mass was rushing toward them, even though no single part of any image in the video moved.
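The stimulus construction described above can be sketched in a few lines. This is an assumed reconstruction, not the study's actual code: each frame is fresh random noise rendered at a slightly larger spatial scale, so the smudges grow across frames while no individual feature persists from one frame to the next. The function name, frame size, and growth rate are all illustrative.

```python
import numpy as np

def growing_static(n_frames=60, size=256, growth_rate=1.01, seed=0):
    """Generate frames of 'growing static'.

    Each frame is brand-new binary noise upsampled (nearest-neighbour)
    to the current smudge scale, which increases a little every frame.
    Because the noise is regenerated each time, size changes are present
    but there is no frame-to-frame feature motion.
    """
    rng = np.random.default_rng(seed)
    frames = []
    scale = 8.0  # initial smudge size in pixels (assumed value)
    for _ in range(n_frames):
        # Fresh coarse black/white noise at the current scale.
        coarse = rng.random((int(np.ceil(size / scale)),) * 2) > 0.5
        reps = int(np.ceil(scale))
        # Upsample each noise cell into a reps x reps smudge, then crop.
        frame = np.kron(coarse, np.ones((reps, reps)))[:size, :size]
        frames.append(frame.astype(np.uint8) * 255)
        scale *= growth_rate  # smudges grow a bit each frame
    return frames
```

At a typical 30 frames per second, 60 frames matches the two-second duration mentioned in the release; controlling `growth_rate` would control the apparent approach speed.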
Surprisingly, after viewing the test movies, people reported seeing stationary images as shrinking. This is an example of a perceptual after-effect and strongly suggests that our brains contain automatic mechanisms designed to directly measure changes in object size without using optic flow information.
In the future, Knill says he’d like to see studies done on people who can’t perceive motion properly, such as many Alzheimer’s disease patients, to learn whether they can still judge an object’s three-dimensional motion based on its changing size. Such studies could shed light on both how Alzheimer’s affects the brain, and how a healthy brain makes sense of the visual world around us. Even “seeing robots,” such as those that might someday drive your car for you, could be helped by this research since it describes a shortcut that programmers could use to help computers interpret a robot’s motion from real-world images.
Knill began the research while at the University of Pennsylvania, along with his graduate student, Paul R. Schrater, and Eero P. Simoncelli, assistant professor at the Center for Neural Science at New York University. Knill continues his work on optic flow at the University of Rochester.
The above post is reprinted from materials provided by the University of Rochester. Note: Materials may be edited for content and length.