The New Five-Year Rule

by Jost Zetzsche

If you carry the title of agent provocateur like our good friend Renato Beninatto, it's your job to say things that are, well, provocative. Everyone would be really disappointed if you didn't, and this is one of the reasons why we love our agent in the first place.

But here's the thing: The title of agent provocateur has already been assigned, and there really is no reason for so many others to jump on the bandwagon these days or to forget to put things into context.

What am I talking about? Well, there's a new corollary to the five-year rule being bandied about. The old one, of course, goes back to the 1950s: from then on, it was predicted at regular five-year intervals that machine translation was going to "get there" in just another five years. The new corollary claims that translation memory technology will be gone within five years.

Really?

Umm, no!

In fact, I would contend just the opposite: Translation memory technology had been dormant for many years, but in the past two or so years it has woken up with exciting new developments that will in turn spawn yet more developments and, accordingly, different use cases.

So, where do these different kinds of evaluations come from? There are certainly agendas that might be motivating some (such as pushing other technologies), but I think much of it is about vantage points. Folks who write about our industry have a tendency to write about the very large translation buyers, especially those in the technology sector: the Microsofts, Adobes, and Oracles. These translation buyers (along with their peers) are very technology-driven and typically highly involved in language technology initiatives (look, for instance, at the founding members of TAUS, the Translation Automation User Society). They are great drivers for our industry and they're fun to follow. But how much of your work comes from these companies? Some of you will certainly work for some of them, but I think it's fair to say that they don't make up the majority of our business. (I've tried to find some hard numbers on how the industry's spending is split between the very large clients and everyone else, and I found to my surprise that there are no such numbers, even from our industry's premier research group, Common Sense Advisory. So you'll have to put up with my best guess, which is a ratio of 20:80, with 20 percent coming from the very large clients and 80 percent from the smaller and typically more profitable ones.)

What kind of technology are these very large translation buyers currently investing in? Alongside the "good old" translation memory technology, they are looking for ways to optimize translation workflows, reuse and manage content, and, of course, improve and use machine translation. Some of these goals are shared by smaller clients, but typically with much less emphasis overall and much greater emphasis on translation memory. (As a side note on statistical machine translation engines: they exist because they are fed with translation memories, along with other bilingual data, and this will continue to be the case.)

The tool kit of the translator in the foreseeable future will contain terminology tools, quality assurance tools, and translation memory tools--these three typically packaged into a translation environment tool--and for some of us a machine translation component. But even for those who use a machine translation component, will it ever be preferable to take a suggestion from the MT engine over a match from a well-maintained TM? Absolutely not. In fact, most machine translation workflows run a translation memory pass first, before a segment is ever sent to the MT engine.
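
For the technically curious, here is a minimal Python sketch of that order of operations: check the TM for a good enough fuzzy match and fall back to MT only when none is found. The class, the function names, the toy similarity scoring, and the 85% threshold are all my own illustrative assumptions, not the inner workings of any actual tool.

    # A toy "TM first, MT second" lookup. Every name and number here is an
    # illustrative assumption, not the API or logic of any particular TEnT.
    from difflib import SequenceMatcher

    class TranslationMemory:
        """A miniature TM: stored (source, target) pairs with fuzzy lookup."""

        def __init__(self, pairs):
            self.pairs = pairs

        def best_match(self, segment):
            """Return (score, target) for the closest stored source segment."""
            best_score, best_target = 0.0, None
            for source, target in self.pairs:
                score = SequenceMatcher(None, segment, source).ratio()
                if score > best_score:
                    best_score, best_target = score, target
            return best_score, best_target

    def translate_segment(segment, tm, mt_translate, fuzzy_threshold=0.85):
        """Use a TM match if it is good enough; only then fall back to MT."""
        score, target = tm.best_match(segment)
        if target is not None and score >= fuzzy_threshold:
            return target, "TM match ({:.0%})".format(score)
        return mt_translate(segment), "MT fallback"

    tm = TranslationMemory([
        ("Press the red button.", "Drücken Sie den roten Knopf."),
    ])
    print(translate_segment("Press the red button!", tm, lambda s: "<MT output>"))
    # -> ('Drücken Sie den roten Knopf.', 'TM match (95%)')

Real tools score matches and handle fallbacks far more cleverly than this, of course, but the order of operations is the point: the TM gets the first word.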

True, the appearance of TEnTs will change. Most TEnTs will be sold as a Software-as-a-Service offering where you won't pay for an unlimited license but instead will pay something on a monthly or annual basis; most TEnTs will have most or all of their work done in a browser-based interface; most or all of our data will be stored in the cloud; and there will accordingly be more sharing of data and resources.

Still, if you are completely adamant about not wanting to go that route, the "traditional" way will still be available (after all, good old MS Word-bound Wordfast Classic is still more popular than the more powerful Wordfast Pro).

Maybe it also helps to remind ourselves how translation memory technology has evolved just in the last few years after essentially lying dormant for 10 or 15 years:

  • Transit has introduced target segment matching (if no source match is found)
  • Trados, memoQ, Lingotek, and Multitrans now support subsegment matching (the latter two already for some time)
  • memoQ and Multitrans support both translation memories and corpora
  • Text United now has term extraction integrated into the very creation of a project by default
  • Déjà Vu has its terminology database and translation memory cooperate with each other to turn fuzzy matches into perfect ones (the general idea is sketched below)
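
For those who like to see how something like that last item could work in principle, here is a small Python sketch of "repairing" a fuzzy match with terminology data. It is deliberately naive and entirely my own construction; it is not Atril's actual algorithm, just the general idea.

    # A toy illustration of patching a fuzzy match with a termbase.
    # This is my own simplified sketch of the general idea, not how
    # Déjà Vu (or any other tool) actually implements it.

    def repair_fuzzy_match(new_source, tm_source, tm_target, termbase):
        """Swap the single differing source term for its termbase translation.

        termbase maps source-language terms to target-language terms.
        Handles only the simplest case: the two source segments differ
        in exactly one word, and both words are in the termbase.
        """
        new_words, old_words = new_source.split(), tm_source.split()
        if len(new_words) != len(old_words):
            return None  # too different for this naive repair
        diffs = [(o, n) for o, n in zip(old_words, new_words) if o != n]
        if len(diffs) != 1:
            return None
        old_term, new_term = diffs[0]
        if old_term in termbase and new_term in termbase:
            return tm_target.replace(termbase[old_term], termbase[new_term])
        return None

    termbase = {"printer": "Drucker", "scanner": "Scanner"}
    print(repair_fuzzy_match(
        "Turn the scanner off.",          # new segment to translate
        "Turn the printer off.",          # stored TM source (a fuzzy match)
        "Schalten Sie den Drucker aus.",  # stored TM target
        termbase,
    ))
    # -> "Schalten Sie den Scanner aus."

The real implementations obviously handle morphology, multiple differences, and word placement far more gracefully; the point is simply that the termbase and the TM are no longer working in isolation.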

These are just some examples that show that translation memory technology has not reached the end of its line of development. It's only too obvious that the tools that don't yet offer one or more of the features mentioned above will look to adopt them at some point and fine-tune them in the process.

(Plus, I can think of a few other features that I think would help us all, and I would be quite happy to consult with some of the technology vendors on those...)

So, is there a new five-year rule concerning the death of translation memory technology? Absolutely. And just like the original one concerning machine translation, it's going to go on and on. And on.