Yes, I hate this buzzword as much as you do, at least as it’s used in the present political climate. But it did capture your attention, and, like it or not, there actually is some meaning associated with the concept of “fake news” in a more traditional sense.
I believe we’re dealing with several “fake news” items when it comes to translation, especially translation technology. I would like to talk about two of them. The first is something I’ve discussed before at length, though my explanation must have been less than effective, since the misconception still dominates the thinking of many. The second is something we might all be guilty of in some way.
Misconception #1: Working with Machine Translation Is the Same as Post-Editing
The first conceptual misunderstanding is that working with machine translation (MT) is essentially the same as post-editing. Most of us translators know this is not true, but not because we were told so or taught that way. It’s because we know that MT is really only one of many resources (alongside translation memories, termbases, corpora, dictionaries, and other online and offline resources) that can be used in the translation process. We also know that most translation environment tools allow us to dynamically use (or not use) the content that comes from MT engines. Our proven experience stands in sharp contrast to the idea that post-editing (i.e., the correction of raw MT content) is the only way to use that technology.
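To make that concrete, here is a minimal sketch in Python of what “MT as one resource among many” means inside a translation environment tool. Everything in it is invented for illustration (the function names, the toy word-overlap similarity measure, the scores); no real tool or MT engine works exactly this way. The point it demonstrates is structural: suggestions from a translation memory and an MT engine land in one ranked list, and the translator remains free to accept, adapt, or ignore any of them.

```python
# Hypothetical sketch: MT as one suggestion source among many,
# not a forced post-editing step. All names and scores are invented.

from dataclasses import dataclass

@dataclass
class Suggestion:
    source: str   # e.g., "TM" or "MT"
    text: str
    score: float  # match quality from 0.0 to 1.0

def tm_lookup(segment, memory):
    """Return fuzzy translation-memory matches (toy word-overlap measure)."""
    results = []
    for src, tgt in memory:
        overlap = len(set(segment.lower().split()) & set(src.lower().split()))
        score = overlap / max(len(src.split()), 1)
        if score > 0.5:
            results.append(Suggestion("TM", tgt, score))
    return results

def mt_lookup(segment):
    """Stand-in for a call to any MT engine; returns one raw suggestion."""
    return [Suggestion("MT", f"<raw MT output for: {segment}>", 0.6)]

def gather_suggestions(segment, memory):
    """Collect and rank suggestions from all resources. The translator
    decides what, if anything, to do with each of them."""
    suggestions = tm_lookup(segment, memory) + mt_lookup(segment)
    return sorted(suggestions, key=lambda s: s.score, reverse=True)

if __name__ == "__main__":
    memory = [("The contract is valid.", "Der Vertrag ist gültig.")]
    for s in gather_suggestions("The contract is valid until May.", memory):
        print(f"[{s.source} {s.score:.2f}] {s.text}")
```

In this toy run, the high-scoring fuzzy TM match outranks the MT suggestion, and neither is imposed on the translator. Real tools use far more refined matching and engine integration, but the architecture, MT as one feed into a shared suggestion stream, is the same.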
Of course, we could say, “well, let others believe what they want to believe and let me do what I know is best for my business,” but I think there’s a problem with that kind of thinking. I’ve noticed how difficult it is to talk about MT with anyone beyond those who have some practical experience with it. That includes MT researchers and developers and, maybe more importantly, clients of ours who (are trying to) use MT. Typically, these individuals share the assumption that MT can be used by the translator only reactively: the translator responding to suggestions coming from the MT engine (i.e., post-editing). If that’s the assumption, then the projects offered to translators will be structured so that only that kind of work with MT is possible, and research and development into working with MT will look only into that avenue.
And this is not because of evil intent. Wordsmiths like us understand the power of words and language. If I have a concept in mind (such as how to work with MT), and the only language I have to apply to it is that of post-editing, it’s very, very hard to change that. This is why we have to be patient, insistent, and strong in communicating that while post-editing is one way of working with MT output (and in some cases a productive one), more often than not there are other and better ways to work with that technology. Only then will we be sent a different kind of project, and only then will research look more deeply into other kinds of approaches.
Misconception #2: AI Emulates Functions of the Human Brain
This brings us to another topic, one where we ourselves might be helping to communicate something erroneous with unfortunate consequences. I’m talking about artificial intelligence (AI). There has been a lot of writing in this column and elsewhere about AI and its effects on the world of translation, not only via neural MT but also, as we discussed a few months ago, via a whole host of other technologies that have an impact on translation and translation management processes.
Clearly, we need to talk about and understand AI, not as an AI researcher or developer would, but well enough to have a healthy estimation of how much it supports our work now and in the future. But we’ve been led astray on a path littered with our own words and our own imagination. Terms like “neural MT,” “artificial intelligence,” and “deep learning” all seem to suggest that these are processes that emulate functions of the human brain. And this is exactly what pop culture and news outlets also want us to believe.
The fact? It isn’t true. How do I know? Because we don’t understand our brains. We don’t know how memories are stored. We don’t know why some parts of the brain are responsible for some functions but can also be completely reconfigured. We don’t even know whether brain activity is actually a matter of computation or a completely different kind of process. We don’t know what causes moods, creativity, intelligence, wit, and emotions. And we certainly don’t know what “mind” and “consciousness” are. We do know some impressive numbers (100 billion neurons, 100 trillion synapses, etc.), and lots of people are working very hard and making good progress on understanding more and more about the human (or really any) brain. But we’re still very far from having a good grasp on this most elusive of realms.
So, is there no artificial intelligence? Well, yes, there is; it’s just that it doesn’t work like the human brain. In fact, the term “artificial intelligence” is incomplete. We should always refer to its full and technically correct moniker: “narrow AI.” (That already sounds a lot better, doesn’t it?)
Narrow AI is the ability of a machine to process large amounts of data (one narrowly defined task at a time, not concurrently) and make predictions exclusively on the basis of that data. That’s what we have today, and computers are incredibly good at it. Much better than we are.
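Since “prediction exclusively from data” is the heart of the matter, here is a toy illustration in Python. It is my own example, not a description of any real MT or AI system: the program has no grammar, no rules, and no understanding. It only counts which word followed which in its “training data” and predicts the most frequent continuation.

```python
# Toy illustration of data-driven prediction (invented example):
# the program counts next-word frequencies and predicts the most
# common continuation. No linguistic knowledge is involved.

from collections import Counter, defaultdict

def train(corpus):
    """Count, for every word, which words followed it and how often."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            model[current][following] += 1
    return model

def predict(model, word):
    """Return the statistically most likely next word, or None."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

if __name__ == "__main__":
    corpus = [
        "the translator reviews the suggestion",
        "the translator edits the segment",
        "the translator reviews the termbase",
    ]
    model = train(corpus)
    print(predict(model, "translator"))  # -> "reviews" (seen twice vs. once)
    print(predict(model, "reviews"))     # -> "the"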
General AI (also referred to as “Artificial General Intelligence,” or AGI), on the other hand, may never actually be achieved. We don’t even know whether AGI will be built on the basis of narrow AI’s current technology. If we ever reach true AGI, machines will be able to reason, use strategy, make judgments, learn, communicate in natural language, and integrate all of this toward common goals. (And, yes, also likely do a good job with translation and pretty much everything else.)
A few weeks ago I gave a presentation for a class taught by a super-smart developer who also works for a large technology company. I explained the differences between narrow AI and AGI, emphasizing, as I did here, that we don’t understand how our brain works and that it isn’t a model for the current state of AI. At the end of my talk a number of questions were raised, to which my developer acquaintance responded by explaining that our current form of AI is modeled on the human brain. This was exactly the opposite of what I had just said, though I don’t think he realized it. If we’ve been taught a certain concept over and over and over again, hearing the opposite once isn’t enough to replace it. That takes a lot of patience and time.
Keep Working to Change Perceptions
Let’s teach ourselves and others that today’s artificial intelligence doesn’t emulate the human brain (and it’s entirely possible that it will never be able to do so). Let’s keep on repeating to the rest of the world that there are many ways to use MT, sometimes better than those that are assumed by default. We might just be able to turn that “fake news” into real and helpful news.
Further Reading
Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World (The MIT Press, 2018), http://bit.ly/Broussard-Artificial-Unintelligence.
Reese, Byron. The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity (Atria Books, 2018), http://bit.ly/Reese-Fourth-Age.
Jost Zetzsche is chair of ATA’s Translation and Interpreting Resources Committee. He is the author of Translation Matters, a collection of 81 essays about translators and translation technology. Contact: jzetzsche@internationalwriters.com.
This column has two goals: to inform the community about technological advances and to encourage the use and appreciation of technology among translation professionals.