AI hallucination describes a phenomenon that can occur when using artificial intelligence (AI): the AI "hallucinates" and outputs information that is not true, in effect inventing facts. This happens especially when an AI works with sparse or contradictory data. Even modern language models such as ChatGPT are affected.
An example: you ask an AI for a famous person's date of birth. Instead of answering correctly, the AI gives an incorrect date that it has "invented", either because it linked data incorrectly or because it found no precise information. To a layperson the answer seems credible, yet it is completely false.
The risk of AI hallucination exists wherever AI is used - whether in business reports, in summaries or in customer communication. It is therefore important to double-check AI answers rather than trust them blindly.
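One simple safeguard is to ask the model the same factual question several times and treat the answer as unreliable when the replies disagree. The sketch below illustrates this idea; the function `ask_model` is a hypothetical placeholder for whatever AI service is actually in use, not a real API.

```python
from collections import Counter


def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to an AI language model.

    In a real system this would send the prompt to the model in use and
    return its reply. Here it returns a canned answer so the sketch runs
    on its own.
    """
    return "14 March 1879"  # dummy reply for illustration only


def cross_check(prompt: str, attempts: int = 3) -> tuple[str, bool]:
    """Ask the same question several times and compare the replies.

    Returns the most frequent answer and a flag indicating whether all
    replies agreed. Disagreement is a hint (not proof) of hallucination,
    so a flagged answer should be verified against a trusted source.
    """
    answers = [ask_model(prompt) for _ in range(attempts)]
    most_common, count = Counter(answers).most_common(1)[0]
    return most_common, count == attempts


if __name__ == "__main__":
    answer, consistent = cross_check("What is Albert Einstein's date of birth?")
    if consistent:
        print(f"Consistent answer: {answer} (still verify before publishing)")
    else:
        print(f"Inconsistent answers; treat '{answer}' as unverified")
```

Note that agreement across repeated queries does not guarantee correctness; for important facts the answer should still be compared against an authoritative source.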
Understanding the concept of AI hallucination sensitises decision-makers to use AI critically and responsibly, and to verify information before acting on it.