
Machines can learn to respond to new situations like human beings would

Date:
April 28, 2016
Source:
KU Leuven
Summary:
How does the image-recognition technology in a self-driving car respond to a blurred shape suddenly appearing on the road? Researchers have shown that machines can learn to respond to unfamiliar objects like human beings would.

How does the image-recognition technology in a self-driving car respond to a blurred shape suddenly appearing on the road? Researchers from KU Leuven, Belgium, have shown that machines can learn to respond to unfamiliar objects like human beings would.

Imagine heading home in your self-driving car. The rain is falling in torrents and visibility is poor. All of a sudden, a blurred shape appears on the road. What would you want the car to do? Should it hit the brakes, at the risk of causing the cars behind you to crash? Or should it just keep driving?

Human beings in a similar situation can usually tell the difference between, say, a distracted cyclist who's suddenly swerving and roadside waste swept up by the wind. Our response is mostly based on intuition. We may not be sure what the blurred shape actually is, but we know that it looks like a human being rather than a paper bag.

But what about the self-driving car? Can a machine trained to recognize images tell us what the unfamiliar shape looks like? According to KU Leuven researchers Jonas Kubilius and Hans Op de Beeck, it can.

"Current state-of-the-art image-recognition technologies are taught to recognize a fixed set of objects," Jonas Kubilius explains. "They recognize images using deep neural networks: complex algorithms that perform computations somewhat similarly to the neurons in the human brain."

"We found that deep neural networks are not only good at making objective decisions ('this is a car'), but also develop human-level sensitivities to object shape ('this looks like ...'). In other words, machines can learn to tell us what a new shape -- say, a letter from a novel alphabet or a blurred object on the road -- reminds them of. This means we're on the right track in developing machines with a visual system and vocabulary as flexible and versatile as ours."
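
The intuition behind that finding can be sketched in a few lines: a classifier's final layer produces a graded score for every category it knows, and converting those scores into a probability distribution yields a ranked "this looks like ..." judgment rather than a single hard label. The labels and scores below are purely hypothetical illustrations, not data from the KU Leuven study.

```python
import math

def softmax(scores):
    # Convert raw classifier scores into a probability distribution.
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical final-layer scores from a trained image classifier for a
# blurred, unfamiliar shape on the road (illustrative numbers only).
labels = ["cyclist", "pedestrian", "paper bag", "traffic cone"]
scores = [3.4, 2.1, 0.3, -0.5]

probs = softmax(scores)
ranked = sorted(zip(labels, probs), key=lambda pair: -pair[1])
for label, p in ranked:
    print(f"looks like a {label}: {p:.2f}")
```

Even when no single class wins outright, the ranking itself carries the graded, human-like judgment: the shape reminds the network of a cyclist far more than of a paper bag.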

Does that mean we may soon be able to safely hand over the wheel? "Not quite. We're not there just yet. And even if machines are at some point equipped with a visual system as powerful as ours, self-driving cars would still make occasional mistakes -- although, unlike human drivers, they wouldn't be distracted because they're tired or busy texting. However, even in those rare instances when self-driving cars would err, their decisions would be at least as reasonable as ours."


Story Source:

Materials provided by KU Leuven. Note: Content may be edited for style and length.


Journal Reference:

  1. Jonas Kubilius, Stefania Bracci, Hans P. Op de Beeck. Deep Neural Networks as a Computational Model for Human Shape Sensitivity. PLOS Computational Biology, 2016; 12 (4): e1004896 DOI: 10.1371/journal.pcbi.1004896

Cite This Page:

KU Leuven. "Machines can learn to respond to new situations like human beings would." ScienceDaily. ScienceDaily, 28 April 2016. <www.sciencedaily.com/releases/2016/04/160428152316.htm>.
