
Measuring ability of artificial intelligence to learn is difficult

Date: January 17, 2019
Source: University of Waterloo
Summary: Organizations looking to benefit from the artificial intelligence (AI) revolution should be cautious about putting all their eggs in one basket, a study has found.

Organizations looking to benefit from the artificial intelligence (AI) revolution should be cautious about putting all their eggs in one basket, a study from the University of Waterloo has found.

In a study published in Nature Machine Intelligence, Waterloo researchers found that contrary to conventional wisdom, there can be no exact method for deciding whether a given problem may be successfully solved by machine learning tools.

"We have to proceed with caution," said Shai Ben-David, lead author of the study and a professor in Waterloo's School of Computer Science. "There is a big trend of tools that are very successful, but nobody understands why they are successful, and nobody can provide guarantees that they will continue to be successful.

"In situations where just a yes or no answer is required, we know exactly what can or cannot be done by machine learning algorithms. However, when it comes to more general setups, we can't distinguish learnable from un-learnable tasks."

In the study, Ben-David and his colleagues considered a learning model called estimating the maximum (EMX), which captures many common machine learning tasks, such as identifying the best place to locate a set of distribution facilities so that they are accessible to expected future consumers. The researchers found that no mathematical method could ever tell, given a task in that model, whether an AI-based tool could handle that task or not.
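
A rough formalization of the EMX setup, sketched here from the published paper rather than from this release (the symbols are illustrative): given a family $\mathcal{F}$ of subsets of a domain $X$ and an unknown probability distribution $P$ over $X$, a learner sees a finite i.i.d. sample drawn from $P$ and must output a set $G \in \mathcal{F}$ whose probability mass is nearly maximal,

$$P(G) \;\ge\; \sup_{F \in \mathcal{F}} P(F) - \varepsilon \quad \text{with probability at least } 1 - \delta.$$

In the facility-location example, $X$ can be read as the set of possible consumer locations, $P$ as their unknown distribution, and each set in $\mathcal{F}$ as the region covered by one candidate placement of facilities.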

"This finding comes as a surprise to the research community since it has long been believed that once a precise description of a task is provided, it can then be determined whether machine learning algorithms will be able to learn and carry out that task," said Ben-David.

The study, "Learnability can be Undecidable," was co-authored by Ben-David, Pavel Hrubeš from the Institute of Mathematics of the Academy of Sciences in the Czech Republic, Shay Morgan from the Department of Computer Science, Princeton University, Amir Shpilka, Department of Computer Science, Tel Aviv University, and Amir Yehudayoff from the Department of Mathematics, Technion-IIT.


Story Source:

Materials provided by University of Waterloo. Note: Content may be edited for style and length.


Journal Reference:

  1. Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka, Amir Yehudayoff. Learnability can be undecidable. Nature Machine Intelligence, 2019; 1 (1): 44 DOI: 10.1038/s42256-018-0002-3

