AI-driven machine translation still has a long way to go. Image: Facebook

This is the 12th installment in a series on the stupidity of artificial intelligence. Read Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8, Part 9, Part 10 and Part 11.

Common sense would tell us that it is necessary to understand a text in order to translate it. So can an artificial intelligence system actually understand a text in the same sense a human being can? 

The simplest approach would be to have a computer translate word by word, using a digitized bilingual dictionary and ignoring grammatical structure.

Needless to say, the results of this simplistic procedure are often incomprehensible and useless. Translating between human languages requires intelligence in some form. An ideal field for AI to flex its muscles!   
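A minimal sketch makes the weakness obvious. The tiny English-to-German dictionary below is invented for illustration; a real bilingual dictionary would be far larger but would fail in the same way:

```python
# Naive word-by-word translation: look up each word in a bilingual
# dictionary, ignoring grammar, word order and context entirely.
# This toy English-to-German dictionary is purely illustrative.
DICTIONARY = {
    "the": "der", "spirit": "Geist", "is": "ist", "willing": "willig",
    "but": "aber", "flesh": "Fleisch", "weak": "schwach",
}

def word_by_word(sentence: str) -> str:
    words = sentence.lower().rstrip(".").split()
    # Unknown words are passed through unchanged, in brackets.
    return " ".join(DICTIONARY.get(w, f"[{w}]") for w in words)

print(word_by_word("The spirit is willing but the flesh is weak."))
# -> der Geist ist willig aber der Fleisch ist schwach
```

Every word gets "translated", yet the result already contains a grammatical error no human translator would make ("der Fleisch" instead of "das Fleisch"), because the dictionary cannot know that German articles depend on the gender of the noun that follows.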

One of the biggest dilemmas for machine translation (MT) lies in the fact that human language is full of ambiguities. The meanings of words or even entire sentences – and hence also their translations – cannot be determined in isolation, but only in context. The latter can include not only other words and sentences in the text, but also knowledge about the subject matter of the text.    

In the following English-language examples, observe how one and the same phrase signifies completely different things, depending on what comes before or after it: 

  1. The man just arrived at the hospital. His state is poor.
    The governor of Mississippi pleaded for more investment. His state is poor.
  2. The decision will require a great deal of intelligence from our secret agents worldwide.
    The decision will require a great deal of intelligence but, alas, the president is stupid.
  3. The plane was grounded to drain away the static electricity. 
    The plane was grounded because its electrical systems had been tampered with.

Note that different meanings of a single English phrase will generally be translated differently in another language. Thus, in Example 1, the first instance of “his state is poor” could be rendered in Chinese as 他身体不好 = “his bodily condition is not good”; while in the second case, completely different terms would be used: 他的州很穷, where 穷 unambiguously signifies economic poverty and could not be applied to a state of health.

Note also that deciding on the correct meaning often requires factual knowledge.

In Example 3, the translator has to know that connecting an object electrically to the ground is a way to remove static electricity; and that malfunctioning systems are a reason to forbid an airplane from taking off – a completely different meaning of “grounding”. 

It’s interesting also that in Example 3, Google Translate seems to be tricked by the fact that “electric” occurs in both sentences, and it falsely applies the former meaning to the second sentence as well: in Chinese, 接地 = connected to the ground, where the correct translation would be 停飞 = (ordered to) stop flying. This translation failure reveals that Google Translate is working with correlations between words, rather than their actual meaning.

In fact, I deliberately designed the sentences to provoke this error, as an “adversarial example.” (In the meantime, Google may have installed a fix.)
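To see how a purely correlational system can fall into this trap, consider a crude word-sense “disambiguator” that scores each sense of “grounded” by counting overlaps with hand-picked cue words. The senses and cue lists below are invented for illustration; real systems use learned statistics, but the failure mode is analogous:

```python
# A crude correlation-based word-sense guesser: each sense of "grounded"
# is scored by how many of its cue words appear in the sentence.
# The cue lists are hand-picked for illustration only.
SENSES = {
    "electrically connected to earth": {"static", "electricity", "electrical", "charge"},
    "forbidden to fly": {"airport", "mechanical", "safety", "flight"},
}

def guess_sense(sentence: str) -> str:
    words = set(sentence.lower().replace(".", "").split())
    # Pick the sense whose cue words co-occur most with the input.
    return max(SENSES, key=lambda s: len(SENSES[s] & words))

print(guess_sense("The plane was grounded to drain away the static electricity."))
print(guess_sense("The plane was grounded because its electrical systems had been tampered with."))
# Both sentences are assigned "electrically connected to earth", because
# "electrical" is a cue for the first sense -- the same mistake described
# above, made for the same superficial, word-correlation reason.
```

The point of the sketch is that word co-occurrence alone, with no factual knowledge about airplanes or static electricity, cannot separate the two meanings.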

It is great fun to invent similar examples playing on the literal versus idiomatic meanings of a phrase.

Don’t pull my leg. It was badly injured in an accident. 
Don’t pull my leg. Tell me the truth. 

I am sick and tired. My doctor told me to rest and stay at home. 
My boss keeps screaming at me. I am sick and tired.

He is trying to fool you. Don’t let him take you for a ride. 
He is a very poor driver. Don’t let him take you for a ride.

I had to sign the agreement, otherwise they would sue me. So I had no choice but to bite the bullet.   
He put a bullet in my mouth and said, if you don’t bite it, I’ll kill you.  So I had no choice but to bite the bullet.        

Google Translate also has problems with these. 

If you observe your own mental process while reading these examples, you will find that your mind thinks mainly in terms of situations, rather than words and phrases. Reading a phrase such as, “the man just arrived at the hospital” automatically activates your imagination.

You probably won’t directly, visually imagine a scene with the patient coming in, but something in your mind starts moving in that direction. One might call it “pre-imagination.”

In some ways, a text being read is like a play performed in a crowded theater; the audience is your mental processes, reacting to the unfolding dramatic scene in a variety of ways at the same time. Sometimes the “audience” is relatively quiet, but at other times it becomes loud and raucous, reacting to the text with all kinds of thoughts and impulses.

You might easily lose your concentration. 

Drawing by Aubrey Hammond to advertise the Grand Guignol Productions at the Duke of York’s Theater, 1921. Source: Wikimedia Commons

The point I am trying to make is that reading and understanding in the human mind is not a “clean” mathematical process like arranging and rearranging pieces of a puzzle.

Something very different is going on. To use a metaphor from physics: The words are like particles interacting with the “field” of your mental processes. Those processes constitute a continuum –  a whole that has no parts that you can cleanly separate from each other. 

This continuum of mental processes is impossible for AI – in its present form – to deal with directly. AI is too stupid. The best that AI can do is to approximate it using huge databases and complex mathematical procedures. 

As I mentioned in my last article, Yehoshua Bar-Hillel, one of the pioneers of machine translation, argued that 100% automated translation between natural languages, of the sort that could compete with human translators, would be impossible.

He emphasized the fact that correct translation often requires background knowledge not contained in the text. But the totality of knowledge about the world, which a human translator can bring to bear on translating a text, could never be downloaded into a computer. Bar-Hillel spoke of a virtually “infinite” number of facts. I prefer to speak of a “continuum.”  

Human-level translation?

Now the reader might rightly ask: “If computers are so inherently stupid as you claim, and if Bar-Hillel saw fundamental barriers to human-level automatic translation, then how do you explain the fact that today’s AI is threatening to put many professional translators out of work?”

Indeed, 70 years of concentrated effort have actually led to AI translation systems that can compete with humans, at least for certain types of texts and certain quality criteria.

Earlier this year, the prestigious journal Nature Communications published an article entitled, “Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals.”

The article reports on a specific system called CUBBITT (Charles University Block-Backtranslation-Improved). The results reflect progress across-the-board in machine translation systems, especially those employing so-called “deep learning” based on artificial neural networks. The authors state:

“The quality of human translation was long thought to be unattainable for computer translation systems. In this study, we present a deep-learning system, CUBBITT, which challenges this view.

“In a context-aware blind evaluation by human judges, CUBBITT significantly outperformed professional-agency English-to-Czech news translation in preserving text meaning (translation adequacy)…. Moreover, most participants … struggle to distinguish CUBBITT translations from human translations.”
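The “Block-Backtranslation” in CUBBITT’s name points to a well-established data-augmentation technique in neural machine translation: monolingual target-language text is machine-translated back into the source language, yielding synthetic training pairs that supplement the authentic parallel data. A toy sketch of the idea, in which the “reverse model” is just a hypothetical word map rather than a neural network:

```python
# Schematic of back-translation data augmentation. The toy Czech-to-
# English "model" below is a stand-in word map, not the real system.
CS_TO_EN = {"ahoj": "hello", "světe": "world"}

def toy_reverse_model(czech_sentence: str) -> str:
    # A (weak) target->source model translates monolingual Czech text...
    return " ".join(CS_TO_EN.get(w, w) for w in czech_sentence.split())

monolingual_czech = ["ahoj světe"]

# ...producing synthetic (English, Czech) pairs. These are mixed with
# authentic parallel data when training the English->Czech model.
synthetic_pairs = [(toy_reverse_model(cs), cs) for cs in monolingual_czech]
print(synthetic_pairs)  # [('hello world', 'ahoj světe')]
```

The design rationale: monolingual text is vastly more plentiful than professionally translated parallel text, and even noisy synthetic source sentences paired with clean human-written target sentences teach the model to produce fluent target-language output.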

The article makes an elaborate assessment of the CUBBITT system, which I won’t try to summarize. But here is one highlight: 

The authors presented test persons, fluent in both languages, with a series of sentences in English together with their translations into Czech. Half of the translations were done by professional human translators and half by the CUBBITT system, presented in random order.

The test persons were asked to identify which of them were done by professionals, and which were machine translations. Out of 15 participants, nine gave false answers about half the time, indicating that they were unable to reliably distinguish between the two. These nine test persons included three professional translators, three MT researchers, and three other participants. 

Interestingly, an analogous test performed with a second group of test persons, using Google Translate instead of CUBBITT, yielded a much less favorable result: all but one of the 16 participants could pick out the human translations in a statistically significant way.
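The statistics behind “statistically significant” can be sketched with a simple one-sided binomial test: under the null hypothesis that a participant is merely guessing, each human-versus-machine judgment is a fair coin flip. The trial counts below are invented for illustration, not taken from the CUBBITT study:

```python
from math import comb

def binom_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting at least `correct`
    right out of `trials` judgments by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical participants each judging 30 translation pairs:
print(binom_p_value(15, 30))  # ~0.57: fully consistent with guessing
print(binom_p_value(25, 30))  # ~0.0002: reliably tells the two apart
```

A participant who cannot distinguish machine from human translation lands near chance (large p-value); one who can, like most of the Google Translate group, scores far enough above chance that guessing becomes an implausible explanation.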

The article concludes:

“This work approaches the quality of human translation and even surpasses it in adequacy in certain circumstances. This suggests that deep learning may have the potential to replace humans in applications where conservation of meaning is the primary aim.”

It was admitted, at the same time, that CUBBITT’s translations did not achieve the same smoothness or fluency of style as human translators. Indeed:

“Highly qualified human translators with infinite amount of time and resources will likely produce better translations than any MT system. However, many clients cannot afford the costs of such translators and instead use services of professional translation agencies, where the translators are under certain time pressure.”

Time pressure naturally degrades the work quality of human translators, even to the point that they will make more errors on average than the best MT systems. The latter can also offer higher speed. 

Thus, in a certain domain of MT – especially the most routine, non-sensitive translation tasks, where fluent style is not a major consideration – AI can even achieve a “superhuman” capability.

How did this happen? By what magic has AI overcome the kinds of problems I indicated above? Have AI systems become capable of understanding what texts mean, in the way a human being does? 

Look for answers in the next installment of this series. 

Jonathan Tennenbaum received his PhD in mathematics from the University of California in 1973 at age 22. Also a physicist, linguist and pianist, he’s a former editor of FUSION magazine. He lives in Berlin and travels frequently to Asia and elsewhere, consulting on economics, science and technology.