What other industries can learn from the translation industry about A.I. and humans working together

Artificial Intelligence, or A.I. as it is commonly known, is the buzzword of the late 2010s, and for good reason: along with robotics, it will be the defining technology of the 2020s and will redefine the way we work, live and play.

A.I. is built on machine learning: the ability to train machines to produce answers to questions using not just simple, procedural "if-else" programming statements but highly complex algorithms applied to large, specific data sets to produce contextual answers.
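To make that distinction concrete, here is a toy sketch (not from the article) contrasting a brittle "if-else" rule with a model that learns the same task from labelled examples, using scikit-learn. The training set is invented for illustration:

```python
# Toy sketch: hand-written rules vs. a model learned from data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Procedural "if-else" approach: every case must be hand-written.
def rule_based_language(text: str) -> str:
    words = text.split()
    if "the" in words:
        return "en"
    if "le" in words:
        return "fr"
    return "unknown"

# Machine learning approach: patterns are learned from labelled data.
train_texts = ["the cat sat on the mat", "le chat est sur le tapis",
               "where is the station", "où est la gare"]
train_labels = ["en", "fr", "en", "fr"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),  # character n-grams
    MultinomialNB(),
)
model.fit(train_texts, train_labels)

print(rule_based_language("a walk in a park"))        # rules miss: "unknown"
print(model.predict(["une promenade dans le parc"]))  # learned prediction
```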

The first machine learning the general public got to experience en masse was Google Translate, an engine that uses machine learning to translate between languages. Machine translation was around long before Google Translate, but it was the ubiquitous nature of Google search that brought the technology to the masses. Since the 1950s, companies, governments, and universities have been experimenting with machine translation, given that language, or more specifically "understanding", is critical in the world of business and commerce.

The good thing about machines doing the work is that they are fast, relatively inexpensive, can automate some quality and security tasks, and are highly scalable. The problem is that they can be inaccurate, lack context and, thanks to the automation paradox, even propagate errors when they occur. The bad thing about people is that they are slow, expensive, and can make simple errors without noticing. The amazing thing about people is that they have skills and context that machines cannot learn.

When we first started using machines and humans together in the translation process, around 2010, there was a huge push-back from an industry in which many did not believe machines had any future. We felt like Uber turning up to a taxi conference. Thankfully, we stuck to our guns, ignored people entrenched in translation processes developed in the 1980s, and built everything we did around humans and machines translating together.

The main thing we found was that the key to getting machines and humans to work in harmony was the interface where they came together. If you got this right, you could do some amazing things, and I'm sure the same is true of the hybrid applications we are now seeing, such as semi-automated cars. By making our primary development focus a best-in-class translation workbench where humans and machines interact, we were able to wrap a range of other benefits around it.

Our translation workbench
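To give a sense of what that human/machine interface looks like in code, here is a minimal human-in-the-loop sketch. Every function and field name is an illustrative assumption, not the author's actual platform API: the machine drafts, the human post-edits, and each edit is captured as future training data.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Segment:
    source: str
    machine_draft: str
    human_final: Optional[str] = None

training_log: list[Segment] = []

def machine_translate(source: str) -> str:
    """Stand-in for a call to an MT engine."""
    return f"<machine draft of: {source}>"

def store_edit(segment: Segment) -> None:
    """Every (draft, human edit) pair becomes future training data."""
    training_log.append(segment)

def post_edit(segment: Segment, human_edit: str) -> Segment:
    """The human corrects the machine draft at the workbench."""
    segment.human_final = human_edit
    store_edit(segment)
    return segment

seg = Segment("Hello, world", machine_translate("Hello, world"))
post_edit(seg, "Bonjour, le monde")
print(training_log[0])
```

The design point is the loop itself: because every human correction flows back through the same interface, the platform improves the machine with each job instead of treating human work as throwaway.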

The first benefit was that we could put jobs through the platform, run live experiments all day, analyze the data in terms of speed, quality, and cost, and then look for ways to make gradual improvements:

What happens when you split content by paragraph versus by sentence for a particular subject domain – would this speed up the translator and produce better results?

What happens if you use one dataset over another to train the engine in a given language pair – will the machine give better results?

If you curate some of the data pairs before putting them into the workbench, would translators go faster for a net gain?

These types of questions, which can only be answered when live data sets are operating through a centralized workbench platform, enabled us to quickly gain efficiency, speed, and cost benefits for our customers in a way that a non-A.I.-driven platform could not.
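As an illustration of the kind of live experiment this enables, here is a minimal sketch comparing sentence-level against paragraph-level segmentation on translator speed and review quality. The job records and metrics are invented for illustration; the author's actual platform and data will differ.

```python
from statistics import mean

jobs = [
    # (segmentation, words, minutes, review_score 0-1) -- invented numbers
    ("sentence",  500, 42, 0.93),
    ("sentence",  480, 39, 0.95),
    ("paragraph", 510, 55, 0.91),
    ("paragraph", 495, 58, 0.90),
]

def summarize(variant: str):
    rows = [j for j in jobs if j[0] == variant]
    speed = mean(words / minutes for _, words, minutes, _ in rows)
    quality = mean(score for *_, score in rows)
    return speed, quality

for variant in ("sentence", "paragraph"):
    speed, quality = summarize(variant)
    print(f"{variant:9s} speed={speed:.1f} words/min quality={quality:.2f}")
```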

Another benefit of a centralized platform was the ability to centralize and automate quality-control features. We could use the A.I. to insert reserved words and glossary terms into the text and make it easy for human translators to avoid changing these terms. We could also use translator data to improve the selection of the right translator for the job.
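One plausible way to implement that reserved-word and glossary protection is to lock protected terms behind placeholder tokens before translation and restore the approved renderings afterwards. This is a sketch under assumed behaviour; the glossary contents, token format, and helper names are illustrative, not the platform's actual mechanism.

```python
# term -> approved rendering (a brand kept verbatim, a term with a
# mandated translation); both entries are invented examples.
GLOSSARY = {"Acme Cloud": "Acme Cloud", "widget": "gadget-X"}

def protect(text: str):
    """Swap glossary terms for opaque tokens before translation."""
    mapping = {}
    for i, (term, approved) in enumerate(GLOSSARY.items()):
        token = f"__TERM{i}__"
        if term in text:
            text = text.replace(term, token)
            mapping[token] = approved
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Put the approved renderings back after translation and editing."""
    for token, approved in mapping.items():
        text = text.replace(token, approved)
    return text

masked, mapping = protect("Install the widget on Acme Cloud.")
# ... masked text passes through MT and human post-editing untouched ...
print(restore(masked, mapping))
```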

Having the right interface for machines and humans drives all the inputs needed to get the right outcomes: once you know you have to break up content in a specific way, you know how to write the parser that splits unstructured content so it can be accurately translated with the right context, and this drives many other upstream and downstream developments.
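A minimal sketch of such a parser, using deliberately naive paragraph and sentence splitting rules (production parsers handle far more), shows how each segment can carry the positional context a translator or engine needs:

```python
import re

def split_content(raw: str):
    """Break raw text into paragraphs, then sentences, keeping context."""
    segments = []
    for p_idx, para in enumerate(raw.split("\n\n")):
        # Naive split on ., !, ? followed by whitespace.
        sentences = re.split(r"(?<=[.!?])\s+", para.strip())
        for s_idx, sent in enumerate(s for s in sentences if s):
            segments.append({
                "paragraph": p_idx,   # context: which paragraph
                "position": s_idx,    # context: order within it
                "text": sent,
            })
    return segments

doc = "Machines are fast. Humans have context.\n\nTogether they improve."
for seg in split_content(doc):
    print(seg)
```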

The biggest gain from the right interface for A.I. and humans is the ability to collect data points you can use to continually refine the process for the benefit of your customers. Making sure your platform collects the right data points at a very granular level is critical, given that machines get more accurate the more (accurate) data they are fed.
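In practice, granular data collection can be as simple as logging one structured event per edit. The field names and JSON-lines format below are illustrative assumptions, not the author's schema:

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class EditEvent:
    job_id: str
    segment_id: int
    translator_id: str
    machine_draft: str
    human_final: str
    seconds_spent: float
    timestamp: float = field(default_factory=time.time)

def log_event(event: EditEvent, path: str = "events.jsonl") -> None:
    """Append one event per line so downstream analysis stays simple."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_event(EditEvent("job-42", 7, "tr-9", "draft text", "edited text", 31.5))
```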

Machines need context, so machines and humans working together will drive many of the innovations of the next decade. Understanding how to build the right platform and interface for this hybrid world may well determine the success of many ventures going forward.