21 May, 2025 Juan Fernando Marqués

HALLUCINATIONS IN ARTIFICIAL INTELLIGENCE: A GROWING LEGAL RISK

The term “hallucinations” in artificial intelligence (AI) refers to instances in which a system generates false or invented statements and presents them as true. This phenomenon is particularly common in language models such as ChatGPT, DeepSeek, Grok, Qwen, and Gemini, which produce text based on statistical patterns without verifying the accuracy of the content. As a result, an AI can deliver incorrect data, fictitious legal citations, or erroneous legal interpretations with complete conviction.

While in some contexts these hallucinations may seem anecdotal, their use in legal, administrative, or technical settings can have serious legal consequences.

WHY DO THEY OCCUR?

AI models don’t reason or understand like humans. Instead of “knowing” something, they generate the most likely answer based on the data they were trained on. When the available information is insufficient, ambiguous, or contradictory, the system tends to fill the gap with plausible but erroneous content. This lack of verification makes hallucinations a structural risk of generative AI.

ASSOCIATED LEGAL RISKS

  1. Liability:

When an AI provides false information that leads to harm (e.g., a misdiagnosis, an ill-advised investment, or a failed legal action), questions arise about who should bear responsibility: the developer, the provider, or the user. Although legislation is still adapting, there is already debate about whether certain AI applications could be considered defective under the product liability regime.

  2. Use in legal and administrative environments:

Cases have come to light of lawyers sanctioned for submitting AI-drafted briefs containing non-existent jurisprudence. The use of hallucinated information in judicial or administrative proceedings may lead to nullity, procedural errors, or even violations of the right to effective judicial protection.

  3. Defamation and the right to honour:

In some cases, AI systems have falsely attributed to people crimes they never committed or other dishonourable acts. Such statements may constitute an unlawful interference with the right to honour and give rise to civil liability, or even to data protection sanctions, where the false information affects identifiable persons.

  4. Intellectual property:

An AI can generate content derived from, or closely resembling, copyrighted works without any intention or awareness of doing so. If a generated creation infringes the rights of third parties, who is responsible? Current legal frameworks have not yet provided a uniform answer, but many developers are beginning to offer users legal guarantees against possible claims.

CAUTION AGAINST AUTOMATED LEGAL ADVICE

It is essential to note that, in the field of industrial property, no matter how convincing an AI-generated answer to a legal question may seem, it should never replace consultation with an Industrial Property Agent. Legal interpretation requires contextual analysis, up-to-date knowledge of the law, and technical judgment that no automated model can reliably reproduce.