By Bartek Dymek
December 07, 2023

How to Measure the Quality of Translation in 2024

It is not the quality of your translation agency that is low; you just do not know how to measure it correctly. How would you feel if someone told you that?

 

Do you like to pay – with money or time – to be led up the garden path? Do you like changing localization companies over and over again because of low translation quality? They cannot all be that bad, right?


It is safe to say that in over 15 years on the localization market, we have seen more than enough of both better and worse ways of assessing translation quality. Every challenge posed by our clients has shaped our skills and strategies and led us to the conclusion that some things are fluid, and some are not.

It is not good to tamper with a universal truth. On the other hand, you cannot walk on water unless it is frozen.


In this article, we will describe how to digitize what lies in the eye of the beholder, showing you how to get an actual picture of translation quality, not a doodle.

 

This, in turn, will save you the time and resources lost on changing language service providers. It will also enable your translation partner to tune in to the specific needs of your company. Not to mention eliminating frustration in your team.

 

 

What is Translation Quality?

The Internet is full of definitions of quality. Some of them are as old as Plato’s ‘certain level of excellence’, which seems like a good starting point. Newer definitions – closer to our times and needs – include statements such as:

 

  • ‘…all that can be improved’ (Masaaki Imai);

  • ‘…compliance with requirements’ (P.B. Crosby);

  • ‘…the extent to which a set of inherent characteristics meets the requirements’ (ISO 9001:2000).

 

The latter two statements are particularly important because the first thing that needs to be acknowledged about quality assessment is that beauty is in the eye of the beholder. But beauty cannot be measured – considered in this analog way, quality is perceived differently by every person: they either like something (high quality) or they do not (low quality). And that is their right.

 

In localization services (or any other kind of service, for that matter), being able to state that quality is low or high requires criteria whose fulfillment can actually be confirmed. Meeting the criteria equals high quality; failing to meet them equals low quality. There is no place for likes or dislikes.

 

Taking the above into consideration, quality can be defined as the level of adherence of a service or project to the specific requirements of a client.

And to be able to measure or confirm anything, the subjectivity of the above-mentioned beholder has to be excluded. And this is the point where you start freezing the water.

 

 

I Don't Like This Translation, It's Not Pretty

Evaluating a translation in terms of its quality, especially without dedicated tools, can be a nightmare for both the client and their translation vendor.

 

One of the worst pitfalls for the client is delegating translation quality review to one of these two:

  • An employee or colleague who knows the subject matter/product and the languages of the content;

  • The marketing team.

Both of the above seem to be perfect resources for the job: they know the product, speak the target language, know the company’s tone of voice, and they know what they want to see in the translation, right?

 

Yes, however, they already know what they want to see in a translation (sometimes regardless of the original text!).

 

Moreover, without dedicated translation software, they can easily turn the review of a translation into a list of objections related to copywriting and creative writing, which has nothing to do with quality.

Delegating the review to a person who does not share the same approach and linguistic experience as the translation team can lead to a situation in which you are notified about low quality while the actual picture is totally different.

 

The reason for that usually lies in the fact that both parties use different quality-related tools and approaches.

 

Additionally, it often happens that the reviewer analyzes only the translated text and bases their assessment solely on what they read in the target language, without comparing it against the source (original) content. And this is the ‘I don’t like this translation’ part.

Related content: 6 Reasons Your Team May Fail at Translation Quality Review

The good news is that this is not their fault! More often than not, they are not professional translators and, on a daily basis, they are paid to focus on something totally different.

 

Another piece of good news is that equipping them with easy-to-use tools and showing them the way can solve the problem and spare you a lot of needless frustration and redundant back-and-forth.

Dedicated translation tools combined with a proper approach and linguistic expertise allow you to measure the quality of a translation objectively and efficiently.

 

 

 

The Path of a Translation Reviewer is Not a Lonely Path

The first and most important truth about translation is that there are two texts to be analyzed: the source (the original text to be translated) and the target (the translation). There is no way to successfully evaluate the quality of a translation without checking the source text. Skipping it is the worst pitfall when it comes to the review.

 

From our own experience:


We once received negative feedback from a client claiming that their reviewer was furious because the quality of the translation was horribly low. An analysis of the changes introduced by the reviewer revealed the following:

 

  • Out of 117 pages, only 6 pages contained changed sentences;

  • In 9 out of 12 changed sentences, the reviewer introduced amendments that were against the source;

  • There were 3 actual errors detected by the reviewer;

  • Almost all of the changed sentences contained preferential wording reported as errors.

 

Also – and this is really important in terms of linguistic quality assessment (LQA) – the reviewer expected that no changes would be needed at all. While this is an understandable expectation, translation often requires some tuning, and this is normal.

 

Even if a text is translated correctly, when it is checked by another person it is very likely that it will be changed to some extent – at least to express that individual’s approach or linguistic preferences.

 

This does not mean that the translation was poor; it simply shows the abundance of linguistic devices and the many possible ways of conveying the meaning of the original in the translation.

Related content: 8 Translation Review Problems That May Make Your Translation Company's Name a Cussword in Your Office

And yes, sometimes an error is overlooked. That is why the review phase is so important – it is another pair of eyes, checking the translation from the perspective of an actual user.

 

The above-mentioned situation is pretty common – every change that the translation reviewer has to introduce is considered a failure of the translation team.

After the reviewer’s feedback was confronted – meaning that every single change was commented on by the translation team, proving that there was no error (except the 3 cases where we agreed with the reviewer) – they admitted that not only had they not checked the source text, but they had not even been provided with one!

 

It turned out that they were unhappy with the merits of the source text, not the translation. Moreover, they had not been equipped with tools enabling them to easily and effectively communicate their point of view to the translation team.

In a way, even though the client’s reviewer is not part of the LSP (language services provider), they become an external member of the translation team. And good teamwork requires communication.

 

 

Lima, Sierra, Papa. Over. - Rules of Communication With Your Translation Agency

Being able to communicate with the translation team does not mean phone calls or e-mail back-and-forth. It means finding common ground for measuring the quality of a translation and communicating errors/changes in a structured way.

 

And we have more good news for you: everything you need is already here! And it is easy to use.


The only thing the client needs to do is to make sure they do not pay for being misled by the subjectivity of their own translation reviewer.

 

 

Digitizing the Beauty


As mentioned before, excluding subjectivity from linguistic quality assessment (LQA) requires the appropriate approach and dedicated tools.


The approach is simple: If it works, don’t fix it. (However, this does not mean that you cannot amend it).

 

This simple rule works well for translation quality and should be understood as an encouragement to draw a distinction between a change and a correction.

Not every change corrects a real error. In many cases, the reviewer may introduce changes based on their preferences or in-company knowledge that the translator may not have. On the other hand, if a change corrects an actual error, the error has to be clearly identified in terms of:

 

• Category (the type of error, e.g. Fluency or Terminology);

• Subcategory (the subtype of error, e.g. Style or Wrong Term); and

• Weight (the seriousness of the error, e.g. Neutral or Minor).

 

Combining this approach with clear change categorization creates common ground for both parties – the client’s reviewer and the translation team. Both work using the same metalanguage (Categories and Weights) to describe standardized concepts (factual errors vs. preferential amendments).

 

 

This is the Client. Over. - Your Translation Review Options

After the translation has been delivered, you have two ways of checking its quality:

 

  • Hiring another localization company (or a freelance language professional) to act as a third party, or

  • Using your own resources.

Related content: Freelance Translators Vs. a Translation Agency: An Honest Comparison

While hiring a third party is a rather good idea (the tools and approach are already covered), many clients decide to use their own resources. And this is also a good idea, because in-house resources know the product/service best – they often use it on a daily basis or train others to use it.

 

They just need proper communication tools that will minimize or eliminate potential struggles over the changes, as the negative feedback is certain to be analyzed in detail by the translation team. And questions will be asked.

 

 

The Common Ground – A Remedy to All


Effective communication between the reviewer and the translation team requires a standardized approach and clearly stated requirements defining quality. The most common requirements are:

 

• Linguistic correctness,
• Stylistic consistency,
• Terminology consistency,
• Glossary adherence,
• Reference material consistency,
• UI convention.

 

Of course, apart from the obvious ones, it is recommended to define your own quality requirements.


Once these are ready, it is time to actually measure the level of the translation’s compliance and the best way to do it is to use a standardized Quality Metric.


Basically, a Quality Metric (QM) is a set of error categories and weights that allows every error/change detected during the review to be described. When counted, these will show you the real picture of quality, painted with numbers.

 

The QM framework is quite flexible, allowing you to concentrate on the number of errors or on their weights. The best-known examples of QMs for translation are LISA and DQF.


The DQF QM, unlike LISA, allows preferential changes to be registered with a Neutral weight (which means they are not counted as errors when calculating the quality score).

 

This approach is especially useful for localization companies that use a QM during revision (a check of the translation by a second language professional for its merits and linguistic compliance, as well as its adherence to the client’s requirements).

 

The DQF Quality Metric consists of 8 Categories, 35 (optional) Subcategories, and 4 Error Weights, allowing every possible error introduced into a translation to be described.

 

And should there be no relevant Category, Other comes to the rescue.
While translators love to categorize and subcategorize changes, 8 Categories and 4 Error weights are enough to effectively communicate the nature of a change:

 

  • DQF
    • Categories
        • Accuracy (wrong meaning);
        • Fluency (language problem);
        • Locale Convention (unnatural format);
        • Verity (target-culture-specific issue);
        • Style (way of writing);
        • Design (layout-related issue);
        • Terminology (wrong term);
        • Other (everything else).

 

    • Weights
        • Neutral (It's OK, but I like it better my way);
        • Minor (Oops, but nothing serious);
        • Major (Ouch, this one may cause a significant misunderstanding);
        • Critical (Whoa, this one is definitely a huge deal!).

 

The Neutral weight is a straightforward application of the ‘if it works, don’t fix it’ rule.
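
To make the counting part concrete, below is a minimal sketch (in Python) of how a reviewer's findings could be tallied into a score. The Category and Weight names follow the DQF list above, but the penalty values, the sample findings, and the per-1,000-words normalization are illustrative assumptions, not the official DQF scoring formula – the sketch only shows how Neutral changes stay out of the error count while weighted errors add up.

```python
from collections import Counter

# Illustrative penalty points per Weight – an assumption for this sketch,
# not the official DQF values. Neutral carries no penalty, as described above.
PENALTY = {"Neutral": 0, "Minor": 1, "Major": 5, "Critical": 10}

# Hypothetical findings recorded by a reviewer: (Category, Weight) pairs.
findings = [
    ("Terminology", "Neutral"),   # preferential change – not counted as an error
    ("Fluency", "Minor"),
    ("Accuracy", "Major"),
]

def penalty_per_1000_words(findings, word_count):
    """Total penalty points normalized per 1,000 translated words (lower is better)."""
    total = sum(PENALTY[weight] for _category, weight in findings)
    return total / word_count * 1000

errors_by_category = Counter(
    category for category, weight in findings if weight != "Neutral"
)

print("Errors by category:", dict(errors_by_category))
print("Penalty per 1,000 words:", round(penalty_per_1000_words(findings, 2500), 2))
```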

 

 

I've Got the Nail, Where Do I Put It?

Having the tool only gets you halfway there. The Quality Metric defines the approach and gives your reviewer a means of communicating with the translation team. Now it is time to depict the actual quality clearly. Here, there is no place for scribbling.

 

The first approach that may come to mind is introducing changes directly in a clean (i.e. target-only) version of the translation. Usually, this is an MS Word file (using Track Changes or Comments) or an MS PPT file (using comments).

While this is not a bad idea, it has some significant flaws:

 

  • When there are many changes, it is hard to count them manually;

  • Replying to the comments in MS Word may be a bit troublesome;

  • Using Track Changes and Comments requires using two functions to introduce one change;

  • In a PPT file, a comment has to be placed at the exact location of a change; when there are multiple changes close together, the comment icons can obscure the text.

 

A good and easily deployable solution to this issue is to convert the MS Word or PPT file to PDF format.

Related content: File Formats Vs. Technical Translation Turnaround Time: What's the Catch?

This way, the reviewer may use the highlight function to comment on the actual changes or propose something to be implemented.

 

For example:

 

Translation quality measurement tool example

 

Or:

 

Translation quality measurement tool display

(The Subcategory here would be Grammar.)

 

 

Preparing the feedback as described above makes it clear to everyone where an actual error was corrected (Fluency; Minor) and where a preference was introduced (Terminology; Neutral).


When it comes to preferential changes, these are quite significant too – they show the translation team the client’s preferences and should be taken into account in future projects. Some clients follow the rule that if a preference (for example, regarding terminology) from a previous translation project is not applied in the next one, it becomes an error. This is not common, but it happens.


The proposed approach is quite useful; however, even though we have managed to exclude subjectivity from the equation, it is still not easy to measure the quality effectively.

 

The next step would be preparing a register of detected errors along with their weights. It may look like this:

 

 

Translation quality measurement tool example

Comments in the translation quality measurement tool (The Comment is optional.)

 

The translated text with tracked changes, along with the report on changed segments (sentences, phrases, or terms), makes the message 100% clear. In case of doubt, the translation team can easily identify the disputable change – be it its weight or the change itself – and address the issue focusing on the merits, not on a subjective point of view. Reviewers are not always right.

 

Bear in mind that there is a Source column as well. A reviewer always has to consider the source when applying any change.
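
If the register is kept outside a dedicated tool, even a plain spreadsheet or CSV file will do. Below is a minimal sketch (Python; the file name, column set, and example row are hypothetical) of what one register entry could look like, mirroring the report described above – source, target, the reviewer's change, Category, Weight, and an optional Comment:

```python
import csv

# Hypothetical register entry; the columns mirror the report described above.
register = [
    {
        "Source": "Press the power button to start the device.",
        "Target": "Naciśnij przycisk zasilania, aby uruchomić urządzenie.",
        "Reviewer's change": "Wciśnij przycisk zasilania, aby uruchomić urządzenie.",
        "Category": "Fluency",
        "Weight": "Neutral",
        "Comment": "Preferential wording – not an error.",
    },
]

# Write the register to a CSV file that can be shared with the translation team.
with open("lqa_register.csv", "w", newline="", encoding="utf-8") as handle:
    writer = csv.DictWriter(handle, fieldnames=list(register[0].keys()))
    writer.writeheader()
    writer.writerows(register)
```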


Still, preparing such a report requires using at least two tools:

 

• Text Editor/PDF Reader,
• A form for registering the changes.

 

If only there were a tool covering both, right? There is.

 

 

One (Translation Tool) to Rule Them All


Most Computer-Aided Translation (CAT) tools offer such functionality. Some of them already have preconfigured Language Quality Assurance (LQA) models; others may require the LQA model to be configured.

A good example of such an LQA module is memoQ. It offers a web-based interface that does not require the reviewer to install any additional software. Even better, a translation company may keep a pool of spare licenses to be allocated to external reviewers.


The whole LQA (DQF-based) interface looks like this:

 

 

Display example of registering errors in memoQ

 

 

After finishing the review, you can simply generate a report with a detailed error description and statistics:

 

 

translation quality measurement report example

 

DQF report example

 

And having the analog converted into numbers makes it possible to paint a picture of the quality of a translation – in photo quality (pun intended).

 

 

What Can I Do With These Numbers From the Translation Quality Report?

First of all, you can stop being misled by the subjective approach of your reviewers. Having clear, measurable data on quality makes it possible to agree on the quality thresholds with your professional translation service provider.


This, in turn, can be a great help to your localization partner in shaping the localization process. Not to mention that each report is invaluable in showing your preferences regarding the final shape of the translation.


Moreover, having the numbers makes it possible to present the data graphically, which can show you that 120 changes in a 100K-word project do not mean bad quality.
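
As a quick back-of-the-envelope check of that claim: 120 changes spread over 100,000 words is just 1.2 changes per 1,000 words. A tiny sketch of that arithmetic, with a purely hypothetical acceptance threshold of the kind you would agree on with your vendor:

```python
changes = 120
words = 100_000

changes_per_1000_words = changes / words * 1000   # 1.2 changes per 1,000 words
print(f"{changes_per_1000_words:.1f} changes per 1,000 words")

# Hypothetical threshold agreed with the language services provider.
AGREED_THRESHOLD = 5.0
print("Within the agreed quality threshold:", changes_per_1000_words <= AGREED_THRESHOLD)
```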

 

 

The Choice Is Yours


When it comes to making an informed decision, which set of data would you prefer to have:

 

This:

I do not like this translation!

 

 

Or this?

 

 

example of a translation quality report

 

translation quality report

 

example of translation quality report

 

 

What You Get from Translation Quality Measurement



Having the data and knowing how to present it is all you need to gain a full and reliable measurement of the quality of translations delivered by your translation agency.


Using proper quality metrics (the approach) combined with dedicated tools (the measurement) can show you the areas in which your localization company should improve. It is better to adjust the approach than to keep changing language service providers over and over again, right?

Related content: 5 Ways That Translation Companies Cause You to Overspend

Moreover, it will give you the ability to ask actual quality-related questions, not only of your localization company but also of your own reviewer, leaving no place for subjectivity.


You will not be led up the garden path anymore because now, instead of a diluted analog opinion (i.e. ‘I do not like this translation’), you have an ice bridge of communication between your own company and your translation company. From water to an ice bridge – it’s just physics.

 


Recommended articles:

7 Tips to Avoid Wasting Your Translation Budget

4 Reasons Brands Reject Translation Companies That Could Be Right for Them