AI hallucinations

Help, my AI is hallucinating!


What is an AI hallucination?

We speak of an AI hallucination when a generative large language model (LLM) produces false information or facts that do not correspond to reality. These hallucinations often appear plausible, at least at first glance, because the model generates fluent, coherent text.

However, it is important to emphasize that LLMs do not lie intentionally; they simply have no awareness of the content they generate.

“Large Language Models have a tendency to invent new (false) information very confidently.”

– Thora Markert, Head of AI Research and Governance at TÜVIT

Why do AI hallucinations occur?

The technical causes of AI hallucinations are varied. Possible factors include, for example:


Outdated, poor-quality or contradictory training data on which the LLM is based

Incorrect classification or labeling of data

Missing context or unclear, inconsistent user input

Difficulties in recognizing colloquial language, sarcasm, etc.

Inadequate training or generation methods, or errors in the implementation


LLMs can also generate hallucinations even when they are based on consistent and reliable data sets.

Reducing hallucinations is therefore one of the fundamental challenges for AI operators and developers, not least because LLMs are usually a black box, which makes it difficult to determine why a particular hallucination was generated.

What are types of AI hallucinations?

The term AI hallucination covers a broad spectrum, ranging from minor inconsistencies to completely fictitious information. Typical types of AI hallucinations include the following (a sketch of how some of them could be checked automatically follows the list):

Sentence contradictions
Generated sentences contradict earlier sentences or other parts of the same response.

Contradictions with the prompt
The generated response or parts of it do not match the user's prompt.

Factual contradictions
Information invented by the LLM is presented as a fact.

Random hallucinations
The LLM generates random information that has nothing to do with the actual prompt.
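
Some of these types can be flagged automatically, for example by testing whether sentences in an answer contradict each other or the prompt. The following Python sketch illustrates the idea for the first two types; the nli_label helper is a hypothetical placeholder for any natural language inference model and does not refer to a specific library API.

```python
# Minimal sketch: flagging "sentence contradictions" and
# "contradictions with the prompt" with a generic NLI model.
from typing import List


def nli_label(premise: str, hypothesis: str) -> str:
    """Hypothetical placeholder for a natural language inference model call
    that returns "contradiction", "entailment" or "neutral"."""
    raise NotImplementedError("plug in an NLI model of your choice")


def find_sentence_contradictions(sentences: List[str]) -> List[tuple]:
    """Sentence contradictions: pairs of sentences in one answer that contradict each other."""
    conflicts = []
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if nli_label(sentences[i], sentences[j]) == "contradiction":
                conflicts.append((sentences[i], sentences[j]))
    return conflicts


def contradicts_prompt(prompt: str, sentences: List[str]) -> List[str]:
    """Contradictions with the prompt: answer sentences that contradict the user's request."""
    return [s for s in sentences if nli_label(prompt, s) == "contradiction"]
```

Such checks only highlight inconsistencies; factual contradictions and random hallucinations still require comparison against trusted sources or human review.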

What are the risks of AI hallucinations?

If users rely too heavily on the results of an AI system because they appear convincing and reliable, they may not only believe the false information themselves but also spread it further.

For companies that use LLM-based services for customer communication, there is also the risk that customers are given false information, which in turn can damage the company's reputation.

“LLMs are powerful tools, but they also come with challenges such as the phenomenon of AI hallucination.

Through comprehensive testing, we therefore support AI developers in identifying and minimizing existing risks as effectively as possible and in further strengthening confidence in the technology.”

– Vasilios Danos, Head of AI Security and Trustworthiness at TÜVIT

How do I recognize AI hallucinations?

The easiest way to recognize or expose an AI hallucination is to carefully check the information provided for correctness. As a user of generative AI, you should always bear in mind that it can make mistakes and apply a “four-eyes principle”: one pair of eyes belongs to the AI, the other to a human who verifies the output.
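
One simple heuristic that can support such a check is to ask the model the same question several times and flag strongly diverging answers for human review. The sketch below illustrates this idea; ask_llm is a hypothetical placeholder for whichever LLM client is actually used.

```python
# Minimal sketch of a self-consistency check: the same question is asked
# several times and low agreement is flagged for human review.
from collections import Counter


def ask_llm(question: str) -> str:
    """Hypothetical placeholder for a call to the LLM client actually in use."""
    raise NotImplementedError("plug in your LLM client here")


def self_consistency_check(question: str, samples: int = 5) -> dict:
    """Ask the same question several times and measure how much the answers agree."""
    answers = [ask_llm(question).strip().lower() for _ in range(samples)]
    majority_answer, frequency = Counter(answers).most_common(1)[0]
    agreement = frequency / samples
    return {
        "majority_answer": majority_answer,
        "agreement": agreement,               # 1.0 means all samples agree
        "needs_human_review": agreement < 0.8,
    }
```

Agreement across samples does not prove correctness; it only highlights answers that deserve a closer look by a human.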

How can AI hallucinations be prevented?

To prevent AI hallucinations and address other challenges of AI systems, testing by independent third parties is recommended. Ideally, vulnerabilities can then be identified and mitigated before applications are officially deployed.
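
As an illustration of what such a test could look like in its simplest form, the following sketch answers questions with known, vetted reference answers and reports the share of deviating responses. The tiny QA set and the ask_llm helper are illustrative assumptions, not part of any specific test procedure.

```python
# Minimal sketch of a pre-deployment hallucination test against a
# reference set of questions with vetted answers.

# A tiny, purely illustrative reference set.
REFERENCE_QA = [
    {"question": "What is the capital of Germany?", "reference": "Berlin"},
    # ... extend with domain-specific questions and vetted reference answers
]


def ask_llm(question: str) -> str:
    """Hypothetical placeholder for a call to the system under test."""
    raise NotImplementedError("plug in the system under test here")


def hallucination_rate(qa_set) -> float:
    """Share of answers that do not contain the vetted reference answer."""
    wrong = 0
    for item in qa_set:
        answer = ask_llm(item["question"])
        if item["reference"].lower() not in answer.lower():
            wrong += 1
    return wrong / len(qa_set)


if __name__ == "__main__":
    print(f"Share of deviating answers: {hallucination_rate(REFERENCE_QA):.0%}")
```

A substring match is of course a crude criterion; independent testers use more robust evaluation methods, but the basic idea of comparing system output against vetted references stays the same.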

 

Do you have questions? I am happy to help!

  

Eric Behrendt

+49 160 8880296
e.behrendt@tuvit.de