TTN offers post-edited machine translations

TTN is expanding its range of services to include post-edited machine translation. TTN is the only translation agency in the world operating its own translation search engine, Keybot. By leveraging this advanced system, TTN can deliver high-quality AI translations at competitive prices. Below is a glossary of the key terms that explain how this successful model works.

CAT tool: TTN has been using computer-assisted translation programs since 1995. Every file TTN translates for you is divided into segments containing whole sentences or parts of sentences. Our translators do not work directly in Word or PowerPoint, but in a special tool that displays the original text on the left and the translation on the right. This CAT tool is connected to multiple databases that assist the translator. Each CAT tool includes quality control functions, such as checking whether terms from the client dictionary have been used in the translation. After translation, the individual segments are stored in a central translation memory and in a terminology database so they can be retrieved for future jobs.
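The two steps described above, sentence segmentation and the client-dictionary check, can be sketched in miniature. The splitting rule and the example glossary below are simplified illustrations, not TTN's actual implementation:

```python
import re

def segment(text):
    """Split a document into sentence-level segments, as a CAT tool does.
    A simple rule-based splitter; real CAT tools use more elaborate rules."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def check_terms(segment_pairs, client_dictionary):
    """Flag segments whose source contains a client term but whose
    translation lacks the required equivalent (a typical QC function)."""
    issues = []
    for i, (src, tgt) in enumerate(segment_pairs):
        for term, required in client_dictionary.items():
            if term.lower() in src.lower() and required.lower() not in tgt.lower():
                issues.append((i, term, required))
    return issues

# Invented example data: the glossary requires 'sealed' -> 'abgedichtet'.
pairs = [("The gearbox is sealed.", "Das Getriebe ist versiegelt.")]
glossary = {"gearbox": "Getriebe", "sealed": "abgedichtet"}
print(check_terms(pairs, glossary))  # flags the missing 'abgedichtet'
```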

TTN translation memory: Translation Memory (TM) is a structured database of translations that your company or organisation has commissioned from us. The TM forms the main basis for automatic translations. Files that have already been translated are imported into this parallel database together with the original texts after post-editing, thus continuously improving the quality of automatic translations.
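A TM lookup might work roughly like this; the fuzzy matching via Python's difflib and the 75% threshold are illustrative assumptions, since real TM engines use more sophisticated similarity scoring:

```python
from difflib import SequenceMatcher

def tm_lookup(source, memory, threshold=0.75):
    """Return the best fuzzy match from the translation memory, or None.
    memory is a list of (source, target) pairs."""
    best, best_score = None, 0.0
    for src, tgt in memory:
        score = SequenceMatcher(None, source.lower(), src.lower()).ratio()
        if score > best_score:
            best, best_score = tgt, score
    if best_score >= threshold:
        return best, round(best_score, 2)
    return None

# Invented example memory:
memory = [
    ("Please restart the device.", "Bitte starten Sie das Geraet neu."),
    ("The warranty is void.", "Die Garantie erlischt."),
]
print(tm_lookup("Please restart the device now.", memory))
```

A near-identical sentence scores above the threshold and reuses the stored translation; an unrelated sentence returns nothing and stays a gap.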

Keybot translation memory: Keybot is TTN's proprietary in-house translation search engine that searches your website and those of your suppliers. Using the Web to TM application specially developed by TTN, the translated web pages are downloaded from your website and converted into translation memories. A separate translation memory is created for each domain. TTN's translators can access these extensive memories via their CAT tools and use client-specific wording and terminology. Machine translation primarily leverages segments from the Keybot memory. Unlike services such as Google Translate, ChatGPT, DeepL and other translation providers that use generic language models for all users, TTN produces custom machine translations tailored specifically to the needs of each individual client.
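The Web to TM step could be sketched as follows; the 1:1 segment alignment and the data layout are simplifying assumptions made for illustration (real alignment of downloaded pages is considerably harder):

```python
from urllib.parse import urlparse

def pages_to_tm(pages):
    """Convert (source_url, source_segments, target_segments) triples into
    per-domain translation memories, one TM per domain as described above.
    Assumes segments are already aligned 1:1."""
    tms = {}
    for src_url, src_segs, tgt_segs in pages:
        domain = urlparse(src_url).netloc
        tm = tms.setdefault(domain, [])
        tm.extend(zip(src_segs, tgt_segs))
    return tms

# Invented example page pair:
pages = [("https://example.com/en/about", ["We build pumps."], ["Wir bauen Pumpen."])]
tms = pages_to_tm(pages)
print(tms["example.com"])  # [('We build pumps.', 'Wir bauen Pumpen.')]
```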

MultiTerm database: Terminology is stored in a central MultiTerm server. Terms can be retrieved or added directly via the CAT tool. The database is also accessible online via a web browser. Normally, the translator adds terms while editing the text. In special cases, the database is compiled by a terminologist who arranges and documents the terms according to systematic criteria.

Keybot terminology database: This database is created using deep learning technology. Pattern recognition is used to analyse the TTN and Keybot translation memories. Compiling these custom databases often takes several days or weeks and must be repeated at regular intervals. Once the translation models have been created, the data is exported to a MultiTerm database. This database usually contains an organisation's entire specialist vocabulary and forms the basic terminology framework for automatic translations.
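As a rough illustration of pattern recognition over a translation memory, here is a toy co-occurrence heuristic: source and target words that only ever occur together are proposed as term pairs. This stands in for the deep learning models described above and is not TTN's actual method; the example memory is invented:

```python
import re
from collections import Counter

def extract_term_candidates(tm, min_count=2):
    """Propose (source_word, target_word) term pairs that always co-occur
    in aligned segments of the translation memory."""
    pair_counts, src_counts, tgt_counts = Counter(), Counter(), Counter()
    for src, tgt in tm:
        src_words = set(re.findall(r"\w+", src.lower()))
        tgt_words = set(re.findall(r"\w+", tgt.lower()))
        src_counts.update(src_words)
        tgt_counts.update(tgt_words)
        for s in src_words:
            for t in tgt_words:
                pair_counts[(s, t)] += 1
    return {(s, t) for (s, t), c in pair_counts.items()
            if c >= min_count and c == src_counts[s] == tgt_counts[t]}

tm = [
    ("the valve leaks", "das ventil leckt"),
    ("close the valve", "schliessen sie das ventil"),
    ("check the seal", "pruefen sie das siegel"),
]
print(sorted(extract_term_candidates(tm)))
```

The heuristic correctly pairs "valve" with "ventil", but also pairs stopwords like "the" and "das"; a real terminology pipeline filters such noise, which is part of why compiling these databases takes days or weeks.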

Translation provider: If you have a text translated automatically by TTN, a partial translation with gaps (known as a cloze text) is first created from the four databases described above. The text is not translated in full immediately, as it would be with Google Translate; the process often takes several minutes, or even hours for larger orders. The gaps that remain after this step are normally filled in by a human translator.
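The cloze-text step amounts to a lookup over previously stored segments, with gaps marked where no database hit exists. A minimal sketch, with invented memory contents:

```python
def build_cloze(segments, tm):
    """Assemble a partial translation: segments found in the databases are
    pre-filled; None marks a gap to be completed later."""
    lookup = dict(tm)
    return [lookup.get(seg) for seg in segments]

# Invented example: one segment is known, one is not.
tm = [("Safety first.", "Sicherheit zuerst.")]
doc = ["Safety first.", "Wear gloves at all times."]
print(build_cloze(doc, tm))  # ['Sicherheit zuerst.', None]
```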

With machine translation, the partial translation created by TTN is sent to a translation provider, which fills in the gaps. Around 30 providers currently offer machine translation, and TTN has developed interfaces to numerous providers that can take the partial text and complete it. The first few orders from a client are forwarded to two or three different providers, and the post-editor then selects the provider that pre-translates the texts best.
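The selection step above boils down to comparing post-editor quality scores across trial providers. The provider names, scores, and scoring scale below are hypothetical:

```python
def pick_provider(scored_trials):
    """scored_trials maps provider -> list of post-editor quality scores
    from the first few orders; return the provider with the best average."""
    return max(scored_trials,
               key=lambda p: sum(scored_trials[p]) / len(scored_trials[p]))

# Hypothetical scores from two trial orders per provider:
trials = {"ProviderA": [3.5, 4.0], "ProviderB": [4.5, 4.2], "ProviderC": [2.9, 3.8]}
print(pick_provider(trials))  # ProviderB
```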

Post-editing: A machine cannot understand the meaning of a text. A computer program generates the target text based on statistical correlations, which can lead to serious translation errors. Post-editing is the process of correcting the machine translation. In your settings, you can specify whether you want your texts post-edited by one or two post-editors. The first post-editor checks that the translation corresponds to the original text, while the second reviews the text from a greater distance, ensuring that idiomatic expressions are used and that the text reads well.

Prices for machine translation (MT): The quality of a machine translation depends to a large extent on the type of source text. A weather report can usually be translated automatically quite well, whereas an automatically translated federal court ruling can be useless. Experienced translators can often translate such texts faster and more accurately if they are not distracted by a misleading machine translation.

At TTN, post-editing is billed by time. For your first few orders, TTN will quote you half the regular price. On the last day of each month, the price is recalculated by dividing the number of lines corrected by the time spent. The new price, which may be higher or lower, will be applied the following month. Over time, the price will stabilise. You will be informed of the price for MT at the end of each month when you receive your bill.
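The monthly recalculation can be illustrated with a small worked example. Since billing is by time, the effective per-line price is the hourly rate divided by the post-editor's throughput (lines corrected per hour); the hourly rate of 120 used here is a hypothetical figure, not TTN's actual rate:

```python
def new_line_price(lines_corrected, hours_spent, hourly_rate=120.0):
    """Effective per-line MT price for the following month.
    hourly_rate is a hypothetical placeholder value."""
    lines_per_hour = lines_corrected / hours_spent
    return round(hourly_rate / lines_per_hour, 2)

# e.g. 600 lines corrected in 4 hours -> 150 lines/hour -> 0.80 per line
print(new_line_price(600, 4))
```

A faster month (more lines per hour) lowers the per-line price; a slower month raises it, which is why the price stabilises over time.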

AI cannibalisation: The Swiss newspaper Neue Zürcher Zeitung reports that AI is quickly getting dumber and dumber: "Artificial intelligence is cannibalising itself. Language AI learns from texts. If the texts it learns from are not created by humans but generated by artificial intelligence, the quality of the results decreases, because more and more errors creep in through the AI-created texts."

Until a few years ago, translators used Google for terminology research, but that is now almost impossible, as the search engine has been severely compromised by SEO and advertising. Many websites use AI-generated texts to attract search engines. Since all automatic translation providers obtain texts from the internet, a kind of 'inbreeding effect' arises that continually degrades text quality. Large providers even try to manipulate terminology so that users always end up on their pages. This leads to a vicious cycle of distorted language use and increasingly incomprehensible texts. It seems paradoxical: the better AI works, the faster it deteriorates.

TTN has developed a system in which company- or organisation-specific data serves as the basis for automatic translations. These texts are checked by post-editors and stored in a database. This creates a self-reinforcing cycle in which translation quality rises while translation becomes more cost-effective.