The term hallucination detection comes from the field of artificial intelligence (AI). It describes methods for checking whether an AI system has produced false or invented information (so-called "hallucinations").
AI systems such as chatbots and text generators are expected to provide reliable answers. In practice, however, they sometimes present seemingly convincing but completely fabricated facts. Hallucination detection helps to recognise such errors automatically and prevent them from reaching users.
An illustrative example: a company uses an AI to answer customer enquiries. When asked about opening hours, the system invents a location that does not exist. With hallucination detection, such fabricated answers can be flagged and filtered out. This protects the company from false statements and customers from confusion.
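The location example above can be sketched as a simple grounding check: compare the entities an answer mentions against a trusted source of truth and flag anything unknown. This is a minimal, hypothetical illustration; the location list, function names and extraction step are assumptions, and a production system would use named-entity recognition and richer knowledge sources rather than exact string matching.

```python
# Minimal sketch of a grounding check for hallucination detection.
# KNOWN_LOCATIONS is hypothetical example data standing in for a
# company's real database of branch locations.
KNOWN_LOCATIONS = {"Berlin", "Hamburg", "Munich"}


def detect_hallucinated_locations(claimed_locations, known_locations):
    """Return every claimed location that is not backed by the known set.

    In a real system, claimed_locations would come from an entity
    extractor run over the AI-generated answer; here it is passed in
    directly to keep the sketch self-contained.
    """
    return [loc for loc in claimed_locations if loc not in known_locations]


def answer_is_trustworthy(claimed_locations, known_locations):
    """Flag the whole answer if it mentions any unknown location."""
    return not detect_hallucinated_locations(claimed_locations, known_locations)


# The AI answer claims an office in "Atlantis", which does not exist,
# so the answer is flagged rather than shown to the customer.
flagged = detect_hallucinated_locations(["Berlin", "Atlantis"], KNOWN_LOCATIONS)
print(flagged)                                                  # ['Atlantis']
print(answer_is_trustworthy(["Berlin"], KNOWN_LOCATIONS))       # True
print(answer_is_trustworthy(["Atlantis"], KNOWN_LOCATIONS))     # False
```

The key design point is that the check runs against data the company controls: the AI's output is never trusted on its own, only after being verified against ground truth.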
Hallucination detection is becoming increasingly important as AI applications become more commonplace. It makes machine-generated answers more reliable and thus protects companies, users and the credibility of digital services.