
Grammar Lost Translation Machine In Researchers Fix Try Will

Date:
September 9, 2005
Source:
University of Southern California
Summary:
The makers of a University of Southern California computer translation system consistently rated among the world's best are teaching their software something new: English grammar.
FULL STORY

The makers of a University of Southern California computer translation system consistently rated among the world's best are teaching their software something new: English grammar.

Most modern "machine translation" systems, including the highly rated one created by USC's Information Sciences Institute, rely on brute-force correlation of vast bodies of pre-translated text from such sources as newspapers that publish in multiple languages.

Software matches up phrases that consistently show up in parallel fashion — the English "my brother's pants" and the Spanish "los pantalones de mi hermano" — and then uses these matches to piece together translations of new material.
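To make the phrase-matching idea concrete, here is a minimal, purely illustrative Python sketch, not ISI's actual system: a tiny hand-written "phrase table" of the kind mined from parallel text, and a greedy lookup that stitches matched phrases together. The phrase pairs and the translate function are invented for this example.

```python
# Toy illustration (not ISI's system): phrase-based translation by
# looking up phrase pairs learned from parallel text.

# A tiny "phrase table" of the kind mined from bilingual corpora.
PHRASE_TABLE = {
    ("los", "pantalones"): "the pants",
    ("de", "mi", "hermano"): "of my brother",
    ("mi", "hermano"): "my brother",
}

def translate(words):
    """Greedily match the longest known source phrase and emit its
    English side; unknown words pass through untranslated."""
    out, i = [], 0
    while i < len(words):
        for length in range(len(words) - i, 0, -1):
            phrase = tuple(words[i:i + length])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i += length
                break
        else:
            out.append(words[i])  # unknown word: copy it through
            i += 1
    return " ".join(out)

print(translate("los pantalones de mi hermano".split()))
# -> "the pants of my brother": intelligible, but not idiomatic English.
#    A real system must also re-order, which is where grammar comes in.
```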

It works — but only to a point. ISI machine translation expert Daniel Marcu says that when such a system is "trained on enough relevant bilingual text ... it can break a foreign language up into phrasal units, translate each of them fairly well into English, and do some re-ordering. However, even in this good scenario, the output is still clearly not English. It takes too long to read, and it is unsatisfactory for commercial use."

So Marcu and colleague Kevin Knight, both ISI project leaders who also hold appointments in the USC Viterbi School of Engineering's department of computer science, have begun an intensive $285,000 effort, called the Advanced Language Modeling for Machine Translation project, to improve the system they created at ISI by subjecting the texts that come out of their translation engine to a follow-on step: grammatical processing.

The step seems simple, but is actually imposingly difficult. "For example, there is no robust algorithm that returns 'grammatical' or 'ungrammatical' or 'sensible' or 'nonsense' in response to a user-typed sequence of words," Marcu notes.

The problem grows out of a natural-language feature noted by M.I.T. language theorist Noam Chomsky decades ago. Language users have an essentially limitless ability to nest and cross-nest phrases and ideas into intricate referential structures — "I was looking for the stirrups from the saddle that my ex-wife's oldest daughter took with her when she went to Jack's new place in Colorado three years ago, but all she had were Louise's second-hand saddle shoes, the ones Ethel's dog chewed during the fire."

Unraveling these verbal cobwebs (or, in the more common description, tracing branching "trees" of connections) is such a daunting task that programmers long ago went in the brute-force direction of matching phrases and hoping that the relation of the phrases would become clear to readers.

With the limits of this approach becoming clear, researchers have now begun applying computing power to the task of assembling grammatical rules. According to Knight, one crucial step has been the creation of a large database of English text whose syntax has been hand-decoded by humans, the "Penn Treebank."
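For readers unfamiliar with the Treebank, the short sketch below shows the kind of hand-built bracketing it contains, read here with the NLTK toolkit; both the toy sentence and the use of NLTK are this article's illustration, not part of the USC work.

```python
# Illustrative only: a Penn Treebank-style bracketing for a toy sentence,
# loaded with the NLTK toolkit (an assumption of this sketch, not a tool
# named by the researchers).
from nltk import Tree

bracketing = "(S (NP (PRP I)) (VP (VBD saw) (NP (DT the) (NN saddle))))"
tree = Tree.fromstring(bracketing)
tree.pretty_print()    # draws the branching "tree" of connections as text
print(tree.leaves())   # ['I', 'saw', 'the', 'saddle']
```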

Using this and other sources, computer scientists have begun developing ways to model the observed rules. A preliminary study by Knight and two colleagues in 2003 showed that this approach might be able to improve translations.

Accordingly, their proposal for the study states: "We propose to implement a trainable tree-based language model and parser, and to carry out empirical machine-translation experiments with them. USC/ISI's state-of-the-art machine translation system already has the ability to produce, for any input sentence, a list of 25,000 candidate English outputs. This list can be manipulated in a post-processing step. We will re-rank these lists of candidate string translations with our tree-based language model, and we plan for better translations to rise to the top of the list."
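The re-ranking idea in that proposal can be sketched in a few lines of Python. Everything here is hypothetical: the candidate-list format, the syntax_score function, and the weighting are stand-ins for whatever the ISI system actually uses.

```python
# Minimal sketch of n-best re-ranking, not ISI's code. Assumptions: the
# translation engine already supplies candidate strings with a baseline
# score, and a hypothetical syntax_score() returns a tree-based
# language-model score (higher = more grammatical).

def rerank(candidates, syntax_score, weight=1.0):
    """Re-order an n-best list by combining the engine's own score with
    a tree-based language-model score for each candidate string."""
    rescored = [
        (engine_score + weight * syntax_score(text), text)
        for text, engine_score in candidates
    ]
    rescored.sort(reverse=True)            # best combined score first
    return [text for _, text in rescored]

# Toy usage with a stand-in scorer (a real one would parse each string):
nbest = [("pants of brother my the", -2.0), ("my brother's pants", -2.5)]
fake_syntax_score = lambda s: 1.0 if "my brother's" in s else 0.0
print(rerank(nbest, fake_syntax_score))
# -> ["my brother's pants", 'pants of brother my the']
```

In the researchers' actual plan, the scoring model would be trained on structures like those in the Penn Treebank, so that candidates whose parse trees look like real English rise to the top of the list.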

One crucial trick the system must master is picking out separate trees from the endless strings of words. But this is doable, Knight believes -- and in the short term, not the long.

Referring to the annual evaluation of translation systems by the National Institute of Standards and Technology, in which ISI consistently earns top scores, he said, "we want to have the grammar module installed and working by the next evaluation, in August 2006."

Knight and Marcu are cofounders and, respectively, chief scientist and chief technology and operating officer of a spinoff company, Language Weaver.


Story Source:

Materials provided by University of Southern California. Note: Content may be edited for style and length.


