Featured Research

from universities, journals, and other organizations

Falling Prey To Machines?

Date:
February 14, 2003
Source:
University Of Michigan College Of Engineering
Summary:
It's coming, but when? From Garry Kasparov to Michael Crichton, both fact and fiction are converging on a showdown between man and machine. But what does a leading artificial intelligence expert--the world's first computer science PhD--think about the future of machine intelligence? Will computers ever gain consciousness and take over the world?

ANN ARBOR, Mich. (Feb. 10, 2003) -- It's coming, but when? From Garry Kasparov to Michael Crichton, both fact and fiction are converging on a showdown between man and machine. But what does a leading artificial intelligence expert--the world's first computer science PhD--think about the future of machine intelligence? Will computers ever gain consciousness and take over the world?

"Computer sentience is possible," said John Holland, professor of electrical engineering and computer science and professor of psychology at the University of Michigan. "But for a number of reasons, I don't believe that we are anywhere near that stage right now."

In the 1960s, Holland created the field of genetic algorithms, a process in which computers solve problems by mimicking biological evolution. By adapting concepts of natural selection and sexual reproduction to computer programming, Holland showed that computers could "evolve" their programming to solve complex problems in ways that even their creators did not fully understand.
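
The idea can be sketched in a few lines of code. This is a minimal illustrative genetic algorithm, not Holland's original formulation: it evolves random bitstrings toward the all-ones string (the classic "OneMax" toy problem), using tournament selection, single-point crossover, and mutation. All parameter values here are arbitrary choices for the sketch.

```python
import random

def evolve_onemax(n_bits=20, pop_size=40, generations=60, mutation_rate=0.02, seed=0):
    """Evolve bitstrings toward all-ones via selection, crossover, and mutation."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # fitness = number of 1s in the bitstring

    # Start from a random population
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals breeds
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)        # single-point crossover
            child = p1[:cut] + p2[cut:]
            # Occasionally flip a bit (mutation)
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = evolve_onemax()
print(sum(best))  # close to the optimum of 20
```

Nothing in the loop "knows" what a good bitstring looks like; the population converges on the optimum purely because fitter individuals are more likely to reproduce, which is the sense in which such programs evolve solutions their authors never wrote down.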

Researchers have since been able to use genetic algorithms to "breed" optimal solutions for things like managing energy distribution systems or designing ultra-efficient aircraft engines. Genetic algorithms also provide the basis for much of Michael Crichton's best-selling novel Prey, in which nano-sized machines evolve into an intelligent, life-threatening swarm. Holland's research and that of several of his students are cited as source material for the book.

But evolving solutions for well-defined optimization problems is distinctly different from synthesizing something as open-ended as consciousness or free will.

According to Holland, the problem with developing artificial intelligence through things like genetic algorithms is that researchers don't yet understand how to define what computer programs should be evolving toward. Human beings did not evolve to be intelligent--they evolved to survive. Intelligence was just one of many traits that human beings exploited to increase their odds of survival, and the test for survival was absolute. Defining an equivalent fitness test that targets intelligence as an evolutionary goal for machines, however, has proved elusive. Thus, it is difficult to draw comparisons between how human intelligence developed and how artificial intelligence could evolve.

"We don't understand enough about how our own human software works to come even close to replicating it on a computer," says Holland.

According to Holland, advances in software have not kept pace with the exponential improvements in hardware processing power, and there are many artificial intelligence problems that cannot be solved by simply performing more calculations. While hardware performance continues to double almost every year and a half, the doubling time for software performance is at least 20 years.
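
Taken at face value, the doubling times quoted above imply a striking gap. A back-of-the-envelope calculation (using only the figures in this article) makes the point:

```python
# Growth implied by the quoted doubling times, over a 20-year span (illustrative only)
hardware_doubling_years = 1.5
software_doubling_years = 20.0
span_years = 20

hardware_gain = 2 ** (span_years / hardware_doubling_years)  # roughly 10,000x
software_gain = 2 ** (span_years / software_doubling_years)  # 2x

print(round(hardware_gain), software_gain)
```

Over two decades, hardware would improve by roughly four orders of magnitude while software merely doubles, which is why, on this view, more calculations alone cannot close the gap.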

"In the final analysis, hardware is just a way of executing programs," says Holland. "It's the software that counts."

Comparisons between the brain and electronic hardware are also difficult to draw. For example, the issue of "fanout" demonstrates the complexity of the brain over even today's most sophisticated computers. Fanout refers to the number of connections an element in a network can have to other elements of the network. Today's most complicated computers have a fanout factor of about 10. The human brain, however, has a fanout of 10,000.

"We don't have the faintest idea of what machines with that kind of fanout would be like, so inference from the capabilities of present machines to such machines is feeble at best," notes Holland. "As Nobel Laureate physicist Murray Gell-Mann says, three orders of magnitude is a new science."

Advances in hardware, however, have helped computers tackle simpler feats of human-like intelligence with some success. In 1997, IBM's Deep Blue supercomputer became the first machine to beat world chess champion Garry Kasparov in a match. In a recent rematch against a machine, the chess program Deep Junior fought Kasparov to a dramatic 3-3 draw. Kasparov said he played better than the machine and would have pressed a human opponent for a win, but he was afraid that the tireless computer would punish him for any small mistake he might make in his fatigue.

"It is a remarkable, but not necessarily surprising accomplishment for computers to play chess at this level. They've been approaching this kind of capability for years," says Holland of the Kasparov-Deep Junior match. "But AI researchers are much more amazed that human beings can still compete with computers on such an even basis given their limited abilities for detailed search. It shows us how much we don't know about the human brain."

Human beings approach playing chess very differently than computers do. Kasparov, the top-ranked chess player in the world, can probably evaluate about two or three moves a second, relying on his superb intuition and pattern-recognition abilities--things very difficult to teach a computer--to help him win. Deep Junior, on the other hand, crunches up to 3 million moves per second and draws on a huge library of past games and possible moves to succeed. Relying on a weighted evaluation function that assigns a numerical score to each possible move, the computer essentially powers through a list of the potential ways any given game can play out.

"Until the last decade of the 20th century, AI relied on clever programming and brute computation," says Holland. "Deep Junior is an example of this approach. But the next step for machine intelligence will be in getting them to invent truly creative solutions to complex problems."

For Holland, the crucial leap in machine intelligence will be when computers start thinking like human beings, rather than just reaching the same results as them with different processes. This kind of advanced artificial intelligence would involve learning new skills, adapting to unforeseen circumstances and using analogy and metaphor like humans do. To make these breakthroughs possible, researchers will need an overarching theory that can shape the field of artificial intelligence in the same way that Maxwell's theory of electromagnetism shaped modern physics.

"We are at the earliest stages of theory-making in AI and mature theories of this kind typically take decades of work," says Holland. "Sentient computers are possible, but I don't think we will have them until we have such guidance."

Holland is a pioneer in the fields of artificial intelligence, parallel computation, adaptive systems and cognitive processes, and is the author of the book Hidden Order: How Adaptation Builds Complexity. He received the world's first PhD in computer science in 1959 from the University of Michigan. He also holds an MA ('54) in mathematics from the University of Michigan and a BS ('50) in physics from the Massachusetts Institute of Technology.

The University of Michigan College of Engineering is consistently ranked among the top engineering schools in the world. The College is composed of 11 academic departments: aerospace engineering; atmospheric, oceanic and space sciences; biomedical engineering; chemical engineering; civil and environmental engineering; electrical engineering and computer science; industrial and operations engineering; materials science and engineering; mechanical engineering; naval architecture and marine engineering; and nuclear engineering and radiological sciences. Each year the College enrolls over 7,000 undergraduate and graduate students and grants about 1,200 undergraduate degrees and 800 masters and doctoral degrees. For more information, please visit our web site at http://www.engin.umich.edu .


Story Source:

The above story is based on materials provided by University Of Michigan College Of Engineering. Note: Materials may be edited for content and length.


Cite This Page:

University Of Michigan College Of Engineering. "Falling Prey To Machines?." ScienceDaily. ScienceDaily, 14 February 2003. <www.sciencedaily.com/releases/2003/02/030214075837.htm>.
