Why Machine Translation Should Have a Role in Your Life. Really!
By Spence Green (@LiltHQ)
Reblogged from The Language of Translation blog with permission from the author (including the image)
Guest author Spence Green talks about a heated topic: Machine Translation, Translation Memories and everything in between. Spence Green is a co-founder of Lilt, a provider of interactive translation systems. He has a PhD in computer science from Stanford University and a BS in computer engineering from the University of Virginia.
It is neither new nor interesting to observe that the mention of machine translation (MT) provokes strong opinions in the language services industry. MT is one scapegoat for ever-decreasing per-word rates, especially among independent translators. The choice to accept post-editing work is often cast in moral terms (peruse the ProZ forums sometime…). Even those who deliberately avoid MT can find it suddenly before them when unscrupulous clients hire “proof-readers” for MT output. And maybe you have had one of those annoying conversations with a new acquaintance who, upon learning your profession, says, “Oh! How useful. I use Google Translate all the time!”
But MT is a tool, and one that I think is both misunderstood and underutilized by some translators. It is best understood as generalized translation memory (TM), a technology that most translators find indispensable. This post clarifies the relationship between TM and MT, dispels myths about the two technologies, and discusses a few recent developments in translation automation.
Translation Memory
Translation memory (TM) was first proposed publicly by Peter Arthern, a translator, in 1979. The European Commission had been evaluating rule-based MT, and Arthern argued forcefully that raw MT output was an unsuitable substitute for translation from scratch. Nonetheless, there were intriguing possibilities for machine assistance. He observed a high degree of repetition in the EC’s text, so efficiency could be improved if the EC stored “all the texts it produces in [a] system’s memory, together with their translations into however many languages are required.” [1, p.94]. For source segments that had been translated before, high-precision translations could be immediately retrieved for human review.
Improvements upon Arthern’s proposal have included subsegment matching, partial matching (“fuzzies”) with variable thresholds, and even generalization over inflections and free variables like pronouns. But the basic proposal remains the same: Translation memory is a high-precision system for storing and retrieving previously translated segments.
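To make that retrieval model concrete, here is a minimal sketch of a TM with exact and fuzzy lookup in Python. The `TranslationMemory` class, the character-level similarity measure, and the 75% fuzzy threshold are illustrative assumptions of mine, not any particular CAT tool’s implementation; production systems match on subsegments, handle inflections, and score matches far more carefully.

```python
from difflib import SequenceMatcher

class TranslationMemory:
    """Toy TM: store source/target pairs, retrieve exact and fuzzy matches."""

    def __init__(self, fuzzy_threshold=0.75):
        self.entries = {}                      # source segment -> target segment
        self.fuzzy_threshold = fuzzy_threshold

    def add(self, source, target):
        self.entries[source] = target

    def lookup(self, source):
        # Exact match: the classic high-precision case.
        if source in self.entries:
            return self.entries[source], 1.0
        # Fuzzy match: return the closest stored segment above the threshold.
        best_score, best_target = 0.0, None
        for stored_source, target in self.entries.items():
            score = SequenceMatcher(None, source, stored_source).ratio()
            if score > best_score:
                best_score, best_target = score, target
        if best_score >= self.fuzzy_threshold:
            return best_target, best_score
        return None, 0.0

tm = TranslationMemory()
tm.add("Close the door.", "Fermez la porte.")
print(tm.lookup("Close the door."))    # exact match, score 1.0
print(tm.lookup("Close the doors."))   # fuzzy match above the threshold
```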
Machine Translation
Arthern admitted a weakness in his proposal: the TM could not produce output for unseen segments. Therefore, the TM “could very conveniently be supplemented by ‘genuine’ machine translation, perhaps to translate the missing areas in texts retrieved from the text memory” [1, p.95]. Arthern viewed machine translation as a mechanism for increasing recall, i.e., a backoff in the case of “missing areas” in texts.
Think of MT this way: Machine translation is a high-recall system for translating unseen segments.
Modern MT systems are built on large collections of human translations, so they can of course translate previously seen segments, too. But for computational reasons they typically store only fragments of each sentence pair, so they often fail to produce exact matches. TM is therefore a special case of MT for repeated text: TM offers high precision, and general MT fills in to improve recall.
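In code, this relationship is a simple backoff, just as Arthern proposed. The sketch below reuses the toy `TranslationMemory` from above; `mt_translate` is a hypothetical callable standing in for whatever MT backend is available, not a real vendor API.

```python
def translate_segment(source, tm, mt_translate):
    """Arthern-style backoff: consult the high-precision TM first,
    then fall back to high-recall MT for unseen segments."""
    target, score = tm.lookup(source)
    if target is not None:
        kind = "tm-exact" if score == 1.0 else "tm-fuzzy"
        return target, kind
    # No TM hit: the MT system can still produce something for any input.
    return mt_translate(source), "mt"
```

Labeling the suggestion as exact, fuzzy, or MT lets the translator judge how much review each segment needs.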
Myths and Countermyths
By understanding MT and TM as closely related technologies, each with a specific and useful role in the translation process, you can offer informed responses when you hear the following proclamations:
- TM is “better than” MT – false. MT is best suited to unseen segments, for which TM often produces no output.
- Post-editing is unique to MT – false. Both TM and MT produce suggestions for input source segments. Partial TM matches are post-edited just like MT output. Errors can be present in TM exact matches, too.
- MT post-editing leads to lower quality translation – false. The translator is always free to ignore the MT just as he or she can disregard TM partial matches. Any effect on quality is probably due to priming, apathy, and/or other behavioral phenomena.
- MT is only useful if it is trained on my data – neither true nor false. Statistical MT systems are trained on large collections of human-generated parallel text, i.e., large TMs. If you are translating text that is similar to the MT training data, the output can be surprisingly good. This is the justification for the custom MT offered by SDL, Microsoft, and other vendors.
- TMs improve with use; MT does not – true until recently. Lilt and CasmaCat (see below) are two recent systems that, like TM, learn from feedback.
Tighter MT Integration
Major desktop-based CAT systems such as Trados and memoQ emphasize TM over MT, which is typically accessible only as a plugin or add-on. This is a sensible default since TM has the twin benefits of high precision and domain relevance. But new CAT environments are incorporating MT more directly as in Arthern’s original proposal.
In the November 2015 issue of the ATA Chronicle I wrote about three research CAT systems based on interactive MT, that is, an MT system that responds to and learns from translator feedback. Two of them are now available for production use:
- CasmaCat – Free, open source, runs locally on Linux or on a Windows virtual machine.
- Lilt – Free, cloud-based, runs on all major browsers.
The present version of CasmaCat does not include TM, so I’ll briefly describe Lilt, which is based on research on translator productivity by me and others.
Lilt offers the translator an integrated TM / MT environment. TM entries, if present, are always shown before backing off to MT. The MT system is interactive, so it suggests words and full translations as the translator types. Smartphone users will be familiar with this style of predictive typing.
Lilt also learns. Recall that both TM and MT are derived from parallel text. In Lilt, each confirmed translation is immediately added to the TM and MT components. The MT system extracts new words and phrases, which can be offered as future suggestions.
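Purely as an illustration of that suggest-and-confirm loop (and emphatically not Lilt’s actual internals), here is how the pieces fit together, again building on the toy `TranslationMemory` above:

```python
class InteractiveSession:
    """Toy suggest/confirm loop; a real system's MT learning is far richer."""

    def __init__(self, tm, mt_translate):
        self.tm = tm                       # grows as segments are confirmed
        self.mt_translate = mt_translate   # hypothetical MT backend

    def suggest(self, source, typed_prefix=""):
        # TM entries, when present and consistent with what the translator
        # has typed so far, take priority over MT.
        target, _ = self.tm.lookup(source)
        if target is not None and target.startswith(typed_prefix):
            return target
        # A real interactive MT system would complete the typed prefix;
        # here we simply return a fresh MT suggestion.
        return self.mt_translate(source)

    def confirm(self, source, target):
        # Each confirmed translation immediately feeds the TM; a real
        # system would also extract new words and phrases for the MT model.
        self.tm.add(source, target)
```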
Conclusion
New translators should think about how to integrate MT into their workflows as a backoff. Experiment with it in combination with your TM. Measure yourself. In a future post, I’ll offer some tips for working with both conventional and interactive MT systems.
—————
[1] Peter J. Arthern. 1979. Machine translation and computerized terminology systems: A translator’s viewpoint. In Translating and the Computer, B.M. Snell (ed.).