Two mechanisms are commonly described by which we locate objects in space. Direct perception occurs when we see, hear, or feel an object; by looking directly at an object, for example, we can easily describe its size, shape, and location in space. When an object is not directly in front of us, however, we must rely on a higher-level mental process known as visualization to recreate the object's location. An everyday example of visualization is reading a map: a map can tell us where a particular landmark is, even if that landmark is not in our direct line of sight.
In a new report in Current Directions in Psychological Science, a journal of the Association for Psychological Science, Roberta L. Klatzky, Bing Wu, and George Stetten of Carnegie Mellon University suggest that we can locate objects in space by accurately combining features from perception and visualization. The resulting spatial representation is called amodal because it is independent of any particular sensory experience, such as vision. Ultrasound imaging provides a real-world context in which people must combine information from direct perception and visualization, and the authors used this technology in their experiments.
Ultrasound works by using sound waves to generate images through soft tissue. A transducer (a probe that is passed along the skin) sends sound waves below the skin's surface. As the sound waves encounter sound-reflecting structures (such as organs), echoes are returned to the transducer. These echoes are converted into 2-D images, showing on a monitor what the structures look like. Doctors increasingly rely on ultrasound to guide a number of surgical procedures, such as placing peripheral catheters and performing breast biopsies. As that reliance grows, it becomes more important to fully understand this technology and to improve how it is used.
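The depth information in the resulting image comes from echo timing. As a rough illustrative sketch (not from the article), assuming the conventional average speed of sound in soft tissue of about 1540 m/s, a reflector's depth can be recovered from the round-trip time of its echo:

```python
# Illustrative sketch of how echo timing maps to depth in an ultrasound
# image. The constant and function names are assumptions for this example,
# not details from the study.

SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, conventional soft-tissue average


def echo_depth_m(round_trip_time_s: float) -> float:
    """Depth of a reflector given the round-trip echo time.

    The pulse travels down to the reflector and back, so the one-way
    depth is half the total distance traveled.
    """
    return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2.0


# An echo returning after 100 microseconds corresponds to ~7.7 cm of depth.
depth = echo_depth_m(100e-6)
```

The machine repeats this timing calculation for many echoes along many beam directions to build up the 2-D image the clinician sees.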
One of the challenging aspects of using ultrasound is that the image we see is displaced from its original source. An ultrasound image of an abdomen, for example, appears on an external monitor. The image on the monitor depends entirely on the placement of the transducer: the image moves whenever the transducer does. In addition, the image on the monitor may be enlarged to focus on a specific detail. These factors can make ultrasound images difficult to interpret, and making sense of them requires a number of higher-level processes. When viewing an ultrasound image, the doctor must mentally rotate and rescale it to know what he or she is looking at. Therefore, when clinicians use ultrasound images to guide surgical interventions, they are relying on visualization to understand what they are seeing and to determine target locations.
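The mental rotation and rescaling described above is analogous to an ordinary 2-D image transform. The following sketch is purely illustrative (the function and values are hypothetical, not the authors'): it maps a feature's on-screen coordinates back toward the patient's frame by rotating and undoing the display magnification.

```python
import math


def rotate_and_rescale(point, angle_rad, scale):
    """Apply a 2-D similarity transform: rotate about the origin, then scale.

    This is analogous to the mental operation a clinician performs when
    relating a rotated, magnified on-screen image to the patient's anatomy.
    """
    x, y = point
    x_rot = x * math.cos(angle_rad) - y * math.sin(angle_rad)
    y_rot = x * math.sin(angle_rad) + y * math.cos(angle_rad)
    return (x_rot * scale, y_rot * scale)


# A feature at (1, 0) on a display rotated 90 degrees and magnified 2x
# maps back to roughly (0, 0.5) in the unmagnified frame (scale = 1/2).
mapped = rotate_and_rescale((1.0, 0.0), math.pi / 2, 0.5)
```

The point of the analogy is that this computation, trivial for software, is a genuine cognitive load when a person must perform it mentally while also guiding a needle.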
The authors have developed a novel ultrasound display — one in which the ultrasound image will appear within the object being scanned (known as an in situ image). In other words, during an abdominal ultrasound, the ultrasound image will actually appear inside the abdomen, floating at the location of the imaged data. They achieved this by mounting a video display and mirror onto the handle of the transducer. Light rays coming from the monitor are bounced back by the mirror, placing the image at its originating location (the abdomen, for example). Viewing the resulting ultrasound image in the abdomen itself relies on direct perception and results in more accurate analysis of the image, compared to visualization, which is needed for viewing ultrasound images on traditional monitors.
However, the authors suggest that viewing ultrasound images on monitors and using them to guide surgeries is not based entirely on visualization. They note that when the transducer contacts the skin, the skin indents and moves toward the object of interest, and doctors performing surgeries must take this displacement into account. Seeing the indentation and feeling the change in pressure do not require higher levels of processing; they are mediated by direct perception. To ensure proper placement of the surgical tool, the doctor needs to look at the image on the screen and know how deep to cut and where to place the tool. The authors conducted a series of experiments to see whether the two methods of spatial localization, direct perception and visualization, can be integrated by means of an amodal representation to ensure proper localization of a target of interest using ultrasound.
In these experiments, participants had to locate a bead in a tank filled with opaque liquid using ultrasound images. To test the accuracy of the visualization component, the participants were asked to use the ultrasound image to guide a needle to the bead. The subjects made errors in locating the bead; they consistently chose locations that were too shallow.
To test the accuracy of the direct perception component, the researchers created a set of tank lids; the center portion of each lid was indented by a certain amount and covered with a rubber surface. Below the lids were elastic bands, which created a resisting force on the lids. The subjects pushed the transducer into the lid until it "bottomed out" and were asked to draw how deeply the transducer was pressed into the lid. The results showed that the more the surface pushed back against the transducer, the greater the subjects judged the indentation to be.
In the third experiment, the researchers tested the accuracy of combining both direct perception and visualization to locate the bead in the tank. They found that the target depth was again underestimated. What was interesting, however, was that the errors from the first two experiments predicted the outcome of the third experiment; the sum of the errors of the individual components (direct perception and visualization) was the same as the errors of the combined condition. The authors surmise that this is evidence of a specific mental process that accurately combines perception with visualization, supporting the notion of amodal representation.
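The additivity finding amounts to a simple arithmetic check: the error in the combined task should equal the sum of the two component errors. A minimal sketch with made-up numbers (the study's actual measurements are not given in this summary):

```python
def predicted_combined_error(visualization_error, perception_error):
    """If the two components are integrated into one amodal representation,
    their errors should simply add in the combined task."""
    return visualization_error + perception_error


# Hypothetical depth errors in cm (negative = judged too shallow);
# these values are illustrative only, not the study's data.
visualization_error_cm = -0.8   # needle-guidance task (visualization alone)
perception_error_cm = -0.4      # indentation-judgment task (perception alone)
combined_error_cm = -1.2        # combined task

predicted = predicted_combined_error(visualization_error_cm, perception_error_cm)
# The additivity result: the predicted error matches the observed one.
assert abs(predicted - combined_error_cm) < 1e-9
```

When the component errors fail to add up in this way, the simple integration account would be undermined; the fact that they did add is what the authors take as evidence for an amodal representation.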
In addition to gaining more insight into perception and how we normally locate objects, the authors note that their findings are very relevant in medical settings, especially for doctors who routinely use ultrasounds to diagnose and treat medical conditions. The authors suggest that by improving localization of proper targets via ultrasound, "the success of clinical outcomes such as catheter placement could be enhanced."