
Tuesday, January 16, 2018

2018: Machine Translation for Humans - Neural MT

This is a guest post by Laura Casanellas @LauraCasanellas  describing her journey with language technology. She raises some good questions for all of us to ponder over the coming year.

Neural MT is all the rage and now appears in almost every translation industry discussion we see. It is sometimes depicted as a terrible job-killing force and sometimes as a savior, though I would bet that it is neither. Hopefully, the hype subsides and we start focusing on solving the issues that enable high-value deployments. I have been interviewed by a few people about NMT technology in the last month, so expect to see even more on NMT here, and we continue to see GAFA and the Chinese and Korean giants (Baidu, Alibaba, Naver) introduce NMT offerings of their own.

Open source toolkits for NMT proliferate, training data is easier to acquire, and hardware options for neural net and deep learning experimentation continue to expand. It is very likely that we will see even more generic NMT solutions appear in the coming year, but generic NMT solutions are often not suitable for professional translation use for many reasons, especially the inability to properly secure data privacy, integrate the technology into carefully built existing production workflows, customize NMT engines for very specific subject domains, and implement the controls and feedback cycles that are critical to ongoing NMT use in professional translation scenarios. It is quite likely that many LSPs will waste time and resources with multiple NMT toolkits, only to find out that NMT is far from being a plug-and-play technology, and that real competence is not easily acquired without significant long-term investment in knowledge building. We are perhaps reaching a threshold year for the translation industry, where skillful use of MT and other kinds of effective automation is a requirement, both for business survival and for developing a sustainable competitive advantage.

The latest Multilingual magazine (January 2018) contains several articles on NMT technology but unfortunately does not have any contributions from SDL and Systran, which I think are probably the companies most experienced with NMT technology in the professional translation arena. I have pointed out many of the challenges that still exist with NMT in previous posts on this blog, but the Multilingual articles define some interesting challenges more clearly and add a few highlights that were new to me, for example:

  • DFKI documented very specifically that even though NMT systems have lower BLEU scores, they exhibit fewer errors in most linguistic categories and are thus preferred by human evaluators
  • DFKI also stated that terminology and tag management are major issues for NMT, and need to be resolved somehow to enable more professional deployments
  • Several people reported that using BLEU to compare NMT with SMT is unlikely to give meaningful results, yet it remains the comparison most often used (a minimal scoring sketch follows this list)
  • Capita TI reported that the cost of building an NMT engine is 50X that of an SMT engine, and the cost of running it is 70X the cost of an SMT engine
  • Experiments run at this stage of technology exploration by most in the professional translation world should not be seen as conclusive and final. Their results will often be more a reflection of the experimenters' lack of expertise than of the actual technology. As NMT expertise deepens and the obvious challenges are worked out, we should expect that NMT will become the preferred model, even for Adaptive MT implementations.
  • SMT took several years to mature and develop the ancillary infrastructure needed to enable MT deployments at scale. NMT will do this faster, but it still needs some time for support infrastructure and key tools to be put in place.
  • MT is a strategic technology that can provide long-term leverage but is most often unlikely to deliver ROI on a single project, and this, plus the unwillingness to acknowledge the complexity of do-it-yourself options, is a key reason that I think many LSPs will be left behind.
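Since BLEU comparisons come up in several of these findings, here is a minimal sketch of how such a corpus-level comparison is typically computed, using the open source sacrebleu package. This is only an illustration of the metric, not a reproduction of any experiment cited above, and the file names are hypothetical.

```python
# Minimal sketch: corpus-level BLEU for two MT systems against the same
# human references, using sacrebleu (pip install sacrebleu).
# The file names are hypothetical placeholders.
import sacrebleu

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f]

references = read_lines("test.ref.txt")   # human reference translations
smt_output = read_lines("test.smt.txt")   # SMT hypotheses, one per line
nmt_output = read_lines("test.nmt.txt")   # NMT hypotheses, one per line

smt_bleu = sacrebleu.corpus_bleu(smt_output, [references])
nmt_bleu = sacrebleu.corpus_bleu(nmt_output, [references])

print(f"SMT BLEU: {smt_bleu.score:.2f}")
print(f"NMT BLEU: {nmt_bleu.score:.2f}")
# As the DFKI finding above suggests, the system with the higher BLEU score
# is not necessarily the one human evaluators prefer.
```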
 


Anyway, these are exciting times, and it looks like things are about to get even more exciting.

I am responsible for all text that is in bold in this post.

--------

2017 has been a year of reinvention. We thought we had it good, and then Neural MT came along.

Riding The Wave


I started in localization twenty years ago and I still feel like an outsider; I don’t have a translation degree, nor do I have a technical background. I am somebody who came to live in a foreign country, liked it, and had to find a career path there in order to be able to stay. Localization was one of the options; I tried it and it worked for me. This business has had many twists and turns and has been forced to adapt and be flexible with each one of them. I think I have done the same, changing and adapting with every new invention; I have tried to ride the wave.

There were already translation memories when I started, but I remember big changes in the way processes worked and, at each turn, more automation was embraced and implemented: I remember the jump from static translation dumps to on-demand localization and delivery, and the implementation of sophisticated automated quality checks. I progressed and evolved, mirroring the industry, and after a brief period as a translator I moved on to work in different positions and departments within the localization workflow. This mobility has given me the opportunity to gain a good understanding of the industry’s main needs and problems.

Six years ago, I stumbled upon Machine Translation (MT). At the time it almost looked like chance, but having seen the evolution of the technology in this short period, I now know that I had it coming; we all did, we all do. It happened because a visionary head of localization requested the implementation of an MT program in their account. I was in the privileged position of being involved in that implementation, which meant that my colleagues and I could experiment with and experience Machine Translation output first hand. For somebody who can speak another language and who has a curious mind, this was a golden opportunity. For a couple of years, we evaluated MT output within an inch of its life: from a linguist’s point of view (error typology, human evaluation), using industry standards (BLEU, yes, BLEU, and others…), setting up productivity tests (how much more productive post-editing is compared with translation from scratch), and so on. We learned to deal with this new tool and we acquired experience that helped us set realistic expectations.
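For readers who have not run such productivity tests, here is a small sketch of the arithmetic typically behind them: measure throughput for translation from scratch and for post-editing of MT output, then express the difference as a percentage gain. The figures below are invented for the example and do not come from the evaluations described here.

```python
# Illustrative sketch of a post-editing productivity comparison.
# The word counts and hours below are made-up example figures.
def words_per_hour(words_processed: int, hours_worked: float) -> float:
    return words_processed / hours_worked

translation_wph = words_per_hour(words_processed=2400, hours_worked=8.0)  # from scratch
post_edit_wph = words_per_hour(words_processed=4200, hours_worked=8.0)    # post-editing MT

gain = (post_edit_wph - translation_wph) / translation_wph * 100
print(f"Translation:  {translation_wph:.0f} words/hour")
print(f"Post-editing: {post_edit_wph:.0f} words/hour")
print(f"Productivity gain: {gain:.1f}%")  # 75.0% with these invented figures
```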

It feels like a lifetime ago. In the last few years, industry research has zoomed in on Machine Translation, and, as a consequence, a colossal amount of research and studies have since been produced by industry and academia on the subject, as we all know.

And I still haven’t mentioned Neural MT (NMT).



The Wondrous NMT


Geeky as it sounds, from the point of view of Machine Translation I can consider myself quite privileged, as I experienced the change from Statistical Machine Translation (SMT) to Neural MT directly, while working for a Machine Translation provider. Again, I was able to compare the linguistic output produced by the previous system (SMT) and the new one (NMT) and see the sometimes very subtle but significant differences. 2017 was a very exciting year.

NMT only really began to be commercially implemented in the last year but, after all the media attention (including on blogs like this one) and the focus at industry and research forums, it feels as if it has been here forever. Everything moves very quickly these days; proof of it is that most (if not all) Machine Translation providers have already adopted this new technology in one way or another.

Technology Steals The Show


Technology is all around us, and it is stealing the show. I would love to do an experiment: ask an outsider to read articles and blog posts related to the localization industry for a month and then ask them, based on what they had read, what they think the level of technology adoption is. I think they would say that the level of adoption (let’s focus on MT) is very high.

I see a different reality though; from my lucky position, I see that many companies in the industry are still hesitant, and maybe one of the reasons for it is fear. Fear of not fully understanding the implications of the implementation, the logistics of it, and of course, fear of not really grasping how the technology works. It is easy to understand how Translation Memory (TM) leverage works, but Machine Translation is a different thing.

I have no doubt in my mind that in five years’ time the gap will be closed; but at the moment there is still a large, not so vocal, group of people who are still not sure of how to start. For them, it might feel a bit like a flu jab: it is painful, it may not really work, but most people are getting it, so it kind of has to be done. All other companies seem to be adopting it, so they feel they need to do the same, but how? And "how" includes questions like: how is this technology going to connect with my own workflow; do I use TMs as well; how do I make it profitable; what is my ROI going to be; how do I rate post-edited words; what if my trusted translators refuse to post-edit; how many engines do I need, one per language, one per language and vertical, one per language and domain…?

MT for Humans


Many of the humans I have worked and dealt with are putting on a brave face, but sometimes they struggle with the concepts; a few years ago it was BLEU, now it is perplexity, epochs… Concepts and terms change very fast. For the industry to fully embrace this new technology, a bigger effort might need to be made to bring it to the human level. The head of a language company will probably know by now that NMT is the latest option, but might not really care to comprehend what the intrinsic differences between one type of MT and the others are. They might prefer to know what the output is like, how to implement it, and how to train their workforce (translators and everybody else in the company) on the technology from a practical point of view: is it going to affect the final quality, what does a Quality Manager or a Language Lead need to know about it, what about rates, can a Vendor Manager negotiate a blanket reduction for all languages and content types? How is it going to be incorporated into the production workflow?
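To make one of those intimidating terms a little more human: perplexity is just the exponential of the average negative log-probability a model assigns to each correct target token, so lower is better. Here is a minimal sketch, with probability values invented purely for illustration.

```python
# Perplexity sketch: exp of the average negative log-probability per token.
# A model that assigns higher probability to the correct tokens has lower
# perplexity. The probability values below are invented for illustration.
import math

def perplexity(token_probs):
    """token_probs: probabilities the model assigned to each reference token."""
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log_prob)

confident_model = [0.60, 0.55, 0.70, 0.50, 0.65]
uncertain_model = [0.20, 0.15, 0.30, 0.10, 0.25]

print(f"Confident model perplexity: {perplexity(confident_model):.2f}")  # ~1.68
print(f"Uncertain model perplexity: {perplexity(uncertain_model):.2f}")  # ~5.36
```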

I think 2018 is going to be the year of mass adoption, and more and more professionals are going to try to figure out all these questions. Artificial intelligence is all around us and the new generations are growing up with it, but today this new bridge created by progress is still being crossed by very many people. Not everybody is on the other side. Yet.


Dublin, 12.I.18


Laura Casanellas is a localization consultant specialized in the area of Machine Translation deployment. Originally from Spain, she has been living in Ireland for the last 20 years. During that time, Laura has worked in a variety of roles (Language Quality, Vendor Management, Content Management) and verticals (Games, Travel, IT, Automotive, Legal) and acquired extensive experience in all aspects of Localization. Since 2011, Laura has specialized in Language Technology and Machine Translation; until last year, she worked as a Product Manager and Head of Professional Services at KantanMT.

Outside of her professional life, she is interested in biodiversity, horticulture, apiculture, and sustainability.




The results of some of the evaluations mentioned in this post are collected in a number of papers:

Shterionov, D., Nagle, P., Casanellas, L., Superbo, R., and O’Dowd, T. (2017). Empirical evaluation of NMT and PBSMT quality for large-scale translation production. https://www.researchgate.net/publication/317345978_Empirical_evaluation_of_NMT_and_PBSMT_quality_for_large-scale_translation_production

Casanellas, L. and Marg, L. (2014). Assumptions, expectations, and outliers in post-editing. EAMT 2014, Dubrovnik.

Casanellas, L. and Marg, L. (2013). Connectivity, adaptability, productivity, quality, price: getting the MT recipe right. XIV Machine Translation Summit, Nice.

Saturday, December 30, 2017

Artificial Intelligence: And You, How Will You Raise Your AI?

This is the final post of 2017, a guest post by Jean Senellart, who has been a serious MT practitioner for around 40 years, with deep expertise in all the technology paradigms that have been used to do machine translation. SYSTRAN has recently been running tests building MT systems with different datasets and parameters to evaluate how data and parameter variation affect MT output quality. As Jean said:

" We are continuously feeding data to a collection of models with different parameters – and at each iteration, we change the parameters. We have systems that are being evaluated in this setup for about 2 months and we see that they continue to learn."

This is more of a vision statement about the future evolution of this (MT) technology, where systems continue to learn and improve, than a direct report of experimental results, and I think it is a fitting way to end the year on this blog.
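For readers who want a more concrete picture of the setup described in the quote above, here is a deliberately generic sketch of a pool of models that keep receiving data while a training parameter is varied at each iteration. Everything in it (the stub model, the data generator, the drifting quality score) is a hypothetical placeholder for illustration, not SYSTRAN's actual system or any particular toolkit's API.

```python
# Generic sketch: a pool of models is fed data continuously while a training
# parameter is perturbed at each iteration, and progress is tracked over time.
# All classes and functions here are illustrative stand-ins.
import random

def new_batch_of_data(n=32):
    # Placeholder for a fresh batch of incoming training data.
    return [random.random() for _ in range(n)]

class StubModel:
    """Stand-in for an MT model; 'quality' simply drifts upward with training."""
    def __init__(self, learning_rate):
        self.learning_rate = learning_rate
        self.quality = 0.0

    def train_one_step(self, data):
        self.quality += self.learning_rate * len(data) * random.uniform(0.5, 1.0)

models = [StubModel(learning_rate=lr) for lr in (0.001, 0.0005, 0.002)]

for iteration in range(100):
    data = new_batch_of_data()
    for model in models:
        # Change a parameter at each iteration, as described in the quote.
        model.learning_rate *= random.choice([0.9, 1.0, 1.1])
        model.train_one_step(data)

for i, model in enumerate(models):
    print(f"model {i}: learning_rate={model.learning_rate:.5f}, quality={model.quality:.2f}")
```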

It is very clear to most of us that deep learning based approaches are the way forward for continued MT technology evolution. However, skill with this technology will come with experimentation and with an understanding of data quality and control parameters. Babies learn by exploration and experimentation, and maybe we need to approach our continued learning in the same way, learning from purposeful play. Is this not the way that intelligence evolves? Many experts say that AI is going to be driving learning and evolution in business practices in almost every sphere of business.




 ===================


Artificial Intelligence is the subject of all conversations nowadays. But do we really know what we are talking about? What if, instead of looking at AI as a kind of ready-to-use software that potentially threatens our jobs, we thought of it as an evolving digital entity with an exceptional faculty for learning? That would break with the current industrial scheme of traditional software, which requires code to be frozen until the next update of the system. AI could then disrupt not only technology applications but also economic models.

Artificial Intelligence has no difficulty in quickly handling exponentially growing volumes of data, with exceptional precision and quality. It thus frees valuable time for employees to communicate, internally and with customers, and to invest in innovative projects. By allowing analysis of all the available information for rapid decision-making, AI is truly the corollary of the Internet era, with all of its threats, whether virtual or physical.




Deep Learning and Artificial Neural Networks: an AI that is constantly evolving


Deep Learning and artificial neural networks offer infinite potential and a unique ability to continually evolve through learning. By breaking with previous approaches, such as statistical data analysis, which demonstrate a formidable but trivial capacity for memorization and calculation, like databases and computers, the neural approach gives a new dimension to artificial intelligence. For example, in the field of automatic translation, artificial neural networks allow the "machine" to learn languages as we do when we are in an immersion program abroad. Thus, these neural networks are never finished learning: after their initial training, they can continue to evolve independently.


A Quasi "Genetic Selection"


Training a neural model is, therefore, more akin to a mechanism of genetic selection, such as those practiced in the agro-food industry, than to a deterministic programming process: in all the sectors where neural networks are used, the AI is selected to keep only those models that learn best and fastest, or that are most adaptable to a given task. The AI techniques used for automatic translation are even customized according to customer needs: business, industry, and specific vocabulary. Over time, some AI techniques will grow in use, and others will disappear because they will not demonstrate enough learning and will not be sufficiently efficient. DeepMind illustrates this ability to the extreme. It was at the origin of AlphaGo, the first algorithm to beat humans at the game of Go. AlphaGo had learned from thousands of games played by human experts. The company then announced the birth of an even more powerful generation of AI, one that managed to learn the game of Go without studying games played by humans, but "simply" by discovering, all alone and through practice, the strategies and subtleties of the game, in a fraction of the time of the original process. Surprising, isn’t it?



Machines and Self-Learning Software


The next generation of neural translation engines will exploit this intrinsic ability of neural models to learn continuously. It will also build on the ability of two different networks to follow unique pathways of progress: specifically, from the same data, like two students, these models can improve by working together or by competing against each other. This second generation of AI is very different because not only are its models taught from existing repositories (existing translations) but, just like newborns, they also learn to… learn over time, which places them in a long-term perspective. This is lifelong learning: once installed in production, for example in a customer's information system, the AI continues to learn and improve.



To each his own AI tomorrow?


Potentially, tomorrow's computer systems may be built like a seed planted on your computer, or in everyday objects, and they will evolve to better meet your needs. Even if current technologies, especially software, are customized, these technologies remain 90% similar from one user to another because they are built into bundled, standard products. They are very costly because their development cycles are long. The new AI, which tends toward tailored solutions, is the opposite of this. In the end, it is the technology that will adapt to man, and not the other way around, as is the case at present. Each company will have its own specific, "hand-sewn" technical means, and you may have an AI at home that you can raise yourself!

Towards a new industrial revolution?


What will be the impact of this evolution on software vendors or on IT services companies? And beyond that, on the entire industry? Will businesses need to reinvent themselves to bring value to one of the stages of the AI process, whether it be adaptation or quality of service? Will we see the emergence of a new profession, the "AI breeders"?


In any case, until AI is seen as a total paradigm shift, it will continue to be seen as software 2.0 or 3.0, a vision that hinders innovation and could make us miss all of its promises, especially the promise of freeing ourselves from repetitive and tedious tasks to restore meaning and pleasure to work.


Jean Senellart, CTO, SYSTRAN SA


Wednesday, December 27, 2017

The Most Popular Posts of 2017

2017 was an especially big year for Neural MT momentum on multiple fronts. We saw DeepL and Amazon introduce new generic NMT product offerings, each with a unique twist of their own. They are both impressive introductions but are limited to a small set of language pairs, and both these companies have made a big deal about superior MT quality, using some limited tests to document this and attract attention. For those who define better quality by BLEU scores, these new offerings do indeed have slightly higher scores than other generic MT solutions. But for those who wish to deploy an industrial-scale MT solution, other things matter more, e.g. the extent of customization possibilities and the range of steering options available to update and tune the base engine, the overall build platform capabilities, and the ability to secure and maintain data privacy. The Asian market also saw several NMT initiatives build momentum, especially from Baidu and Naver.

All of the "private" MT vendors have also expanded their NMT offerings, focusing much more on the customization aspects and two vendors (Lilt & SDL) have also introduced Adaptive NMT for a few languages. These Adaptive NMT offerings may finally get many more translators using MT on a regular basis as more people realize that this is an improvement over existing TM technology. We should expect that these offerings will grow in sophistication and capability over the coming year.

SDL Quality Measurements on Various Types of Customization on SMT & NMT

While industry surveys suggest that open source Moses is the most widely used MT solution in professional settings, I still maintain that many, if not most, of these systems will be sub-optimal, as the model building and data preparation processes are much more complicated than most LSP practitioners expect or understand. NMT currently has four open source toolkit alternatives, plus some private ones, so the complexity for do-it-yourself practitioners escalates. However, in some ways NMT is simpler to build if you have the computing power, but it is much harder to steer, as was described in this post. The alternatives available thus far include:

  • OpenNMT - Systran & Harvard
  • Tensorflow - Google
  • Nematus - University of Edinburgh
  • Fairseq - FaceBook
  • and now Sockeye from Amazon, which allows you to evaluate multiple options (for a price, I am sure).



2018 looks like a year when MT deployment will continue to climb and build real momentum. Strategic business advantage from MT only comes if you build very high-quality systems; remember that we now have great generic systems from DeepL, Google, Microsoft, and others. Translators and users can easily compare LSP systems with these public engines.

While the technology continues to advance relentlessly, the greatest successes still come from those who have the "best" data and understand how to implement optimal data preparation and review processes for their deployments.

The most popular blog posts for the year (2017) are as follows:


1. A Closer Look at SDL's recent MT announcements :-- A detailed look at the SDL ETS system and their new NMT product announcements. I initially set out with questions that were very focused on NMT, but the more I learned about SDL ETS, the more I felt that it was worth more attention. The SDL team were very forthcoming and shared interesting material with me, allowing me to provide a clearer picture of the substance behind this release, which I think Enterprise Buyers, in particular, should take note of, and which I have summarized in the post.

2. Post Editing - What does it REALLY mean?  :-- This is a guest post by Mats Dannewitz Linder that carefully examines three very specific PEMT scenarios that a translator might face and view quite differently. There is an active discussion in the comments as well.


3. The Machine Translation Year in Review & Outlook for 2017 :-- A review of the previous year in MT (2016) and a prediction of the major trends in the coming year, which turned out to be mostly true, except that Adaptive MT never really gained the momentum I had imagined it would with translators.

4. Data Security Risks with Generic and Free Machine Translation :-- An examination of the specific and tacit legal agreements made for data re-use when using "free" public MT, and possibilities for how this can be avoided or determined to be a non-issue.

5. Private Equity, The Translation Industry & Lionbridge :-- A closer examination of the Private Equity rationale which seems to be grossly misunderstood by many in the industry. PE firms typically buy controlling shares of private or public firms, often funded by debt, with the hope of later taking them public or selling them to another company in order to turn a profit. Private equity is generally considered to be buyout or growth equity investments in mature companies.



6. The Problem with BLEU and Neural Machine Translation :-- The reasons for the sometimes excessive exuberance around NMT are largely based on BLEU (not BLUE) score improvements on test systems, which are sometimes validated by human quality assessments. However, it is now understood by some that BLEU, which is still the most widely used measure of quality improvement, can be misleading when it is used to compare certain kinds of MT systems. This post describes how BLEU can sometimes under-represent the performance of NMT systems.

7. From Reasoning to Storytelling - The Future of the Translation Industry :-- This is a guest post by Luigi Muzii on the future of translation, written in what some may say is an irreverent tone. For those participants who add very little value in any business production chain, the future is always foreboding and threatening, because there is often a sense that a reckoning is at hand. It is easier to automate low-value work than it is to automate high-value work. Technology is often seen as a demon, but for those who learn to use it and leverage it, it is also a means to increase their own value-addition possibilities in a known process, and to increase their standing in a professional setting. While translating literature may be an art, most business translation work, I think, is business production chain work: words translated to help you sell stuff, to help customers better use the stuff that has been sold to them, or, increasingly, to understand what customers who bought your stuff think about the user and customer experience.



8. Never Stop Bullshitting: Or Being Popular in the Translation Industry  :--  This is a guest post by Luigi Muzii (and his unedited post title). Luigi likes to knock down false idols and speak plainly, sometimes with obscure (to most Americans anyway) Italian literary references. I would characterize this post as an opinion on the lack of honest self-assessment and self-review that pervades the industry (at all levels) and thus slows evolution and progress. Thus, we see that industry groups struggle to promote "the industry", but in fact, the industry is still one that "gets no respect" or is "misunderstood" as we hear at many industry forums.  While change can be uncomfortable, real evolution also results in a higher and better position for some in the internationalization and globalization arena. Efficiency is always valuable, even in the arts.


9. The Ongoing Neural Machine Translation Momentum :-- This is largely a guest post by Manuel Herranz of Pangeanic, slightly abbreviated and edited from the original to make it more informational and less promotional. Last year we saw FaceBook announce that they were going to shift all their MT infrastructure to a Neural MT foundation as rapidly as possible; this was later followed by NMT announcements from SYSTRAN, Google, and Microsoft.

10. Creative Destruction Engulfs the Translation Industry: Move Upmarket Now or Risk Becoming Obsolete :-- This is a guest post by Kevin Hendzel, whose previous post on The Translation Market was second only to the Post-editing Compensation post in terms of long-term popularity and wide readership on this blog. This new post is reprinted with permission and is also available on Kevin's blog with more photos. I am always interested to hear different perspectives on issues that I look at regularly, as I believe that is how learning happens.



Wishing You all a Happy, Healthy and 
Prosperous New Year