Dec. 5, 2010 A ballet dancer grasps her partner's hand to connect for a pas de deux. Later that night, in the dark, she reaches for her calf to massage a sore spot. Her brain is using different "maps" to plan for each of these movements, according to a new study at UC Santa Barbara.
In preparing for each of these reaching movements, the same part of the dancer's brain is activated, but it uses a different map to specify the action. Planning to hold hands relies on her visual map of space; planning to reach for her calf depends on her mental body map.
Two UCSB scientists studied the brains of 18 individuals who made 400 distinct arm reaches as they lay in an MRI scanner. The researchers found clear differences in brain planning activity with regard to the two types of reaching behavior. Their discovery is reported in the journal Neuron.
"Our results have two important applications," said Scott T. Grafton, professor of psychology. "One is robotics. The other is in the area of machine-brain interface; for example, in developing machines to help paraplegics. A critical issue is to understand how movement-related information is represented in the brain if we're to decode it." Grafton, a leading expert in brain imaging, directs the UCSB Brain Imaging Center where the university's MRI scanner is located.
"We're interested in movement planning and movement control," said Grafton. "We're looking at goal-directed behaviors, when we reach to grasp objects -- visually defined objects in our environment. This forms the basis of our interactions with the world."
The current scientific view is that all reaching movements -- those directed to visual targets or toward one's own body -- are planned using a visual map. "Our findings suggest otherwise," said Pierre-Michel Bernier, first author and postdoctoral fellow. "We found that when a target is visual, the posterior parietal cortex is activated, coding the movement using a visual map. However, if a movement is performed in darkness and the target is non-visual, the same brain region will use a fundamentally different map to plan the movement. It will use a body map."
The maps are located in a brain region called the precuneus, inside the parietal lobe. In the MRI brain images, the researchers measured the Blood Oxygen Level Dependent (BOLD) signal, an indirect measure of brain activity at millimeter scale. They also used a methodology called "repetition suppression." This is what makes the study novel, according to the authors, as it is one of the first to identify where these maps are nested in the human brain. "We are a leader in the use of repetition suppression," said Grafton.
Repetition suppression relies on the fact that when a brain region is involved in two similar activities in a row, it is less active the second time around. The team was able to pinpoint the brain's use of body maps versus visual maps by isolating the location in the brain where the responses were less active with repeated, similar arm reaches.
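The logic of this kind of repetition-suppression analysis can be illustrated with a toy simulation. Everything below is an illustrative assumption, not the study's actual data or analysis pipeline: trials are tagged with a hypothetical "visual" or "body" reference frame, and the simulated response is simply attenuated whenever the frame repeats from one trial to the next.

```python
import random
from statistics import mean

random.seed(42)

def simulate_bold(prev_frame, curr_frame, base=1.0, suppression=0.3, noise=0.05):
    """Simulated BOLD amplitude for one trial (toy model).

    If the current trial uses the same reference frame (visual vs. body)
    as the previous one, the region's response is attenuated -- the
    repetition-suppression effect the analysis looks for.
    """
    amp = base
    if prev_frame == curr_frame:
        amp -= suppression
    return amp + random.gauss(0, noise)

# Hypothetical sequence of 400 reaches, each planned in one of two frames.
frames = [random.choice(["visual", "body"]) for _ in range(400)]

repeat_trials, novel_trials = [], []
for prev, curr in zip(frames, frames[1:]):
    bold = simulate_bold(prev, curr)
    (repeat_trials if prev == curr else novel_trials).append(bold)

# A positive suppression index means the region responds less when the
# reference frame repeats -- evidence that the region encodes that frame.
suppression_index = mean(novel_trials) - mean(repeat_trials)
print(f"mean repeat-trial BOLD: {mean(repeat_trials):.3f}")
print(f"mean novel-trial BOLD:  {mean(novel_trials):.3f}")
print(f"suppression index:      {suppression_index:.3f}")
```

In the real experiment the comparison is made voxel by voxel: a brain region whose signal drops on frame-repeat trials is inferred to represent movements in that reference frame.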
Grafton explained: "The brain is trying to make a map of the world. One map is what you see, which is provided by the visual system. The other map is where the body is in space. This map is based on proprioception -- the sense of limb position -- which is derived from receptors in the skin, muscles, and joints. These maps are very different. How do you connect them? Either the visual map or the body map may be fixed, or neither may be fixed."
The authors' findings argue for the latter, demonstrating that the brain is capable of flexibly switching between these maps depending on the context. No doubt this flexibility underlies our ability to interact with the world with ease despite the ever-changing conditions in which our actions take place.
- Pierre-Michel Bernier, Scott T. Grafton. Human Posterior Parietal Cortex Flexibly Determines Reference Frames for Reaching Based on Sensory Context. Neuron, 2010; 68 (4): 776 DOI: 10.1016/j.neuron.2010.11.002