In courtrooms and other legal settings, AI should never be used for translation or interpreting unless the outcome is checked by a qualified professional proficient in both the source and target languages. The expertise of a legal translator/interpreter guarantees equal access to the law, as well as the right to a fair trial.
At EULITA we know that the quality of software products in the field of translation and interpreting is steadily improving, but we also know that the accuracy of their output does not match that of human translators and interpreters. We strongly believe that in sensitive areas, such as legal, administrative and medical settings, the focus should remain on the risks involved. Even the slightest inaccuracy in translation and/or interpreting can negatively impact fair trials, undermine the rule of law and deny parties full access to justice. We take the stance that such risks must not be taken.
The risks in using AI systems are rooted in the fundamental difference between the way human beings and chatbots communicate. Large Language Models (LLMs) use mathematical algorithms to generate the most probable next word in a given sequence of words. Translation and interpreting, however, are not a mathematical exercise; they are far more complex.
AI systems in our field have several acknowledged deficiencies. When used for translation, such systems struggle with complex context, lack consistency and are not sufficiently accurate. Sometimes they convert affirmative statements into negative ones. They may add phrases that were never said (hallucinations) or omit crucial parts of a sentence or an utterance, struggle with slang and dialects, and fail to understand the emotional context in which an utterance is made. In the field of interpreting, the risks inherent in a machine processing human speech include unnatural prosody, poor sentence segmentation and strange pacing; failure to grasp the contextual and emotional integrity of sensitive narratives; and miscommunication among the persons participating in the communicative event. AI systems may also have difficulty understanding speakers with speech impairments (e.g. stuttering, dysarthria, apraxia of speech or other articulation disorders), as well as speech affected by strong emotional states such as crying, fear, stress, shock or panic, which can significantly affect speech production and intelligibility.
It also needs to be noted that the source of data for AI output is the internet, where abundant data is available for some language pairs, in particular those involving English. Data that could serve as a basis for other language combinations, in particular those involving languages of lesser diffusion, is scarce or even non-existent. For some language pairs, AI can therefore achieve no more than a rudimentary understanding of human communication in legal contexts.
Another problem with machine and AI-generated translation/interpreting is that the issue of liability remains unresolved. The companies developing such software are reluctant to accept any liability for the generated output, or they limit it to the amount of the licence fee paid for their product. The liability of a person (for example a minister) deciding to use or deploy such software in a given field is also unclear. In the case of human translation and interpreting, it is of course the qualified translator/interpreter who is responsible for the result.
In the EU, machine and AI-generated translation and interpreting pose risks under the GDPR and other privacy laws as well, since the issue of confidentiality remains unresolved.
In light of the aforementioned considerations, EULITA’s ExCom is of the opinion that in high-stakes settings such as court, police, healthcare and migration proceedings, qualified legal interpreters and translators should continue to provide their services, because they are keenly aware of their responsibility to understand:
• legislation concerning fundamental rights;
• the risks associated with errors in legal process;
• issues of accountability and liability;
• data protection and privacy issues;
• the need to avoid any discrimination;
• the need to adhere to ethical standards; and
• the need for continuing professional development.
AI tools can be used to support the work of human legal interpreters, for example when preparing for an assignment. We therefore strongly recommend that practitioners of legal translation and interpreting upskill in the use of AI in the provision of their services.
We recommend that the users of our services, policy-makers and other stakeholders ensure that a qualified legal interpreter/translator remains responsible for any translation or interpreting services provided in a legal setting.
In addition, we strongly recommend that before any new proposals are put into practice, a pilot study be conducted for the various language combinations relevant to a given environment.
Another point to consider is that AI-generated translations may not be as cost-effective as expected, since revision by a professional linguist also incurs costs.
Transparency of the process is also very important for the end user: it should be clear which part of the process AI was used for and which legal translator/interpreter is responsible for the final outcome.
EULITA’s ExCom
15 March 2026
