On Google Translate, again
In a couple of postings last winter (see here and here), I discussed Google’s machine translation tool, Google Translate (GT), and expressed my skepticism about it. Today another article on GT, by David Bellos in The Independent, crossed my desktop. Three points made in that article are worth discussing.
Point number one: as I showed experimentally with my little “gay goose” investigation, and as the article confirms, GT does not translate directly between every pair of its 58 languages. Only a few pairs get direct translation. So GT does not really provide 3,306 separate translation services, as advertised. Instead, it uses so-called “pivots”, or intermediary languages. For example, if you ask GT to translate a bit of text from Farsi into Icelandic (or vice versa), the translation will be mediated by English (which is indeed the most common intermediary language, for obvious reasons).
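The pivot mechanism, and the trouble it causes, can be shown with a toy sketch. The word lists and the `pivot_translate` function below are entirely invented for illustration (real pivoting happens over whole statistical models, not word-for-word dictionaries), but the failure mode is the same one my geese suffered: an ambiguous word in the intermediary language poisons the second leg of the trip.

```python
# Toy illustration of pivot (two-leg) translation. All dictionaries here
# are invented for demonstration; this is not how GT is implemented.

# Russian -> English: "весёлые" means merry/cheerful, but English collapses
# that sense into the ambiguous word "gay".
RU_TO_EN = {"весёлые": "gay", "гуси": "geese"}

# English -> German: the second leg can only see "gay", and picks the
# wrong sense ("schwul" = homosexual, where "fröhlich" was meant).
EN_TO_DE = {"gay": "schwul", "geese": "Gänse"}

def pivot_translate(words, first_leg, second_leg):
    """Translate word by word through an intermediary language."""
    intermediate = [first_leg[w] for w in words]
    return [second_leg[w] for w in intermediate]

# The "merry geese" come out the other end with an odd orientation.
print(pivot_translate(["весёлые", "гуси"], RU_TO_EN, EN_TO_DE))
```

The information lost at the pivot (which sense of “gay” was meant) is simply unrecoverable on the second leg, which is why no human translator would work this way by choice.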
The reason GT uses intermediary languages is that GT does not really “translate”. Instead, it searches an enormous database of translations already made by human translators for a good match. Believe it or not, a big chunk of that database is mystery novels (I just knew mystery novels were good for something!!!). Thus, as the author of the article puts it, “John Grisham makes a bigger contribution to the quality of GT’s Icelandic-Farsi translation device than Rumi or Halldór Laxness ever will”. As any human translator knows, passing a text through an intermediary language degrades the quality of the translation. Plenty of anecdotes about such “mediated” translations are out there, and you might have heard some. If you still don’t believe it, read what happened to the “happy geese” in my mediated translation exercise.
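The “search for a good match” idea can itself be sketched in a few lines. The sentence pairs below are invented, and `difflib`’s simple string similarity stands in for GT’s far more sophisticated statistical matching, but the principle is the same: no grammar, no parsing, just retrieval of the closest previously translated text.

```python
import difflib

# A tiny stand-in for GT's corpus of existing human translations
# (source sentence -> published translation); all entries are invented.
TRANSLATION_MEMORY = {
    "the jury reached a verdict": "le jury a rendu un verdict",
    "the witness refused to answer": "le témoin a refusé de répondre",
    "the lawyer filed an appeal": "l'avocat a fait appel",
}

def best_match(sentence, memory, cutoff=0.6):
    """Return the stored human translation whose source sentence is
    closest to `sentence`, or None if nothing clears the cutoff."""
    hits = difflib.get_close_matches(sentence, list(memory), n=1, cutoff=cutoff)
    return memory[hits[0]] if hits else None

# A near-duplicate of a stored sentence retrieves that sentence's translation.
print(best_match("the jury reached its verdict", TRANSLATION_MEMORY))
```

Note what happens when the input drifts too far from anything in the corpus: the lookup fails outright, which is the retrieval-based analogue of GT producing garbage for text unlike anything it has seen.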
Point number two: although Bellos admits that GT “may also produce nonsense”, he claims that
“the kind of nonsense a translation machine produces is usually less dangerous than human-sourced bloopers. You can usually see instantly when GT has failed to get it right, because the output makes no sense, and so you disregard it. (This is why you should never use GT to translate into a language you do not know very well. Use it only to translate into a language in which you are sure you can recognise nonsense.)”
While it is true that a GT-produced “translation” may be less “dangerous” in the sense that it is typically so outlandish that anyone with some knowledge of the target language will spot it, it is also far less useful in practical terms. In most cases (our little “gay geese” experiments aside), people use a translation tool, or turn to a human translator for that matter, because they need a usable translation. When GT spits out nonsense, what are you going to do with it, even if you recognize it as such? You needed a text translated, you used GT, and you are still no closer to having a decent translation than you were before you started. If you need to decipher a sentence or two, you might curse under your breath and ask your Facebook friends for help. If you have a real-life large text, say, a legal document or a technical manual, you will call your local translation company for an estimate. One way or another, you will turn to human translators. They may produce “bloopers”; they may even produce a bad, inaccurate translation, for all you know (and if you don’t know the source language, you might not be able to tell). But they give you a chance of getting a translation that you can actually use.
As for the danger of human translators producing errors that can’t be easily recognized by someone who knows only the target language, it is true that this may happen. And it often does, because errare humanum est, as we all know. But that is exactly why any reputable translation service uses quality control measures that catch the errors in the overwhelming majority of cases: each translation done by a reputable firm is edited by an editor and proofread by yet another set of eyes. Sometimes a document is even back-translated, edited, proofread and compared to the original. For things like International Space Station documentation or clinical trial protocols, the QA is very strict.
In contrast, the assumption with GT is that the machine will either produce a workable translation or such outlandish garbage that it will be thrown away immediately. While this is often the case, more subtle bloopers are not excluded either. Once again, I refer you to the “gay geese” story: if you only know the target language, which is not English (and English is used as an intermediary), you might wonder whether a children’s song really talks about some rare breed of birds with an odd sexual orientation (odd for birds, anyway). Or you might just “buy” it. So using machine translation does not really guarantee that you will be able to catch the errors. But what’s wrong with using the same (human) editors and proofreaders, you might ask? Speaking from experience with machine translation in industrial settings: everything! Most of the time, machine translation tools spit out material that is such nonsense that, in practical terms, it is easier to have a human translate from scratch than to have an editor fix a machine-made translation. Thus, while GT may be a cute tool for games and experiments, it is not a tool that can be used for large-scale translation projects.
Point number three: the author challenges all that is known in current linguistic theory about how people know, produce and process language (is he another “amateurish linguist”? probably!) by saying that
GT is also a splendidly cheeky response to one of the great myths of modern language studies. It was claimed, and for decades it was barely disputed, that what was so special about a natural language was that its underlying structure allowed an infinite number of different sentences to be generated by a finite set of words and rules.
According to Bellos, human translators really do the same thing GT does: they scan a huge database of prior translations in search of a good match. Supposedly, humans do not decompose a sentence into its constituent parts, translate those and, with the context in mind, recompose them into a meaningful sentence in the target language. Instead, they search their memories for bits and pieces of things they’ve already translated and reuse those.
If you, like David Bellos, think that human translators store sentences they’ve already translated, try this little experiment. When you are in the middle of a conversation or discussion with someone, stop them and ask them to repeat verbatim the last sentence they said. Chances are, they will remember the “pure meaning” of what they said, but not the exact wording. (You might want to wear a wire to confirm!) On the off chance that you get a correct response, rare as that is, next time ask your interlocutor to repeat verbatim the third sentence back from where you stopped them. I have tried this many times and always got a negative result (and a stare of incomprehension to go with it!). Once you have scared all your friends off with your crazy little experiments, try it on yourself: stop suddenly and try to recall, verbatim, what you said three sentences ago.
What this experiment will convince you of, I am sure, is that, contrary to David Bellos’s beliefs, even if we “encounter the same needs, feel the same fears, desires and sensations at every turn”, we do not “say the same things over and over again”, at least not in exactly the same way. Although when I debate the merits of machine translation with its advocates, it does seem to me that we do.