Explainable AI (XAI) is a term used in the fields of artificial intelligence, big data and smart data, as well as cybercrime and cybersecurity. It describes artificial intelligence systems whose decisions humans can understand and trace. The aim of Explainable AI is to create transparency and strengthen user trust.
AI decisions often seem like a "black box": the system delivers a result, but nobody can see how it came about. Explainable AI changes this by showing which rules and data an AI bases its decisions on. This is particularly important when decisions affect people, for example when granting loans or diagnosing illnesses.
An illustrative example: a bank uses AI software to assess loan applications. Thanks to Explainable AI, an employee can see which factors (such as income, credit report and occupation) contributed to the rejection or acceptance. This allows the bank to explain its decision and advise applicants better.
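The bank example above can be sketched with a toy linear scoring model, where each factor's contribution to the decision is simply its weight times its value, and the explanation is the list of those contributions. All factor names, weights, and the approval threshold here are invented for illustration; real credit models and attribution methods (such as SHAP) are considerably more involved.

```python
# Toy "explainable" credit decision: a linear score whose per-factor
# contributions can be shown to the applicant.
# All names, weights and the threshold are hypothetical.

WEIGHTS = {
    "income": 0.5,         # per 10k of annual income (hypothetical scale)
    "credit_report": 2.0,  # 1 = clean record, 0 = negative entries
    "occupation": 1.0,     # 1 = permanent employment, 0 = otherwise
}
BIAS = -4.0        # hypothetical intercept
THRESHOLD = 0.0    # score >= 0 means the loan is approved

def explain_decision(applicant: dict) -> dict:
    """Return per-factor contributions, the total score, and the decision."""
    contributions = {
        factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS
    }
    score = BIAS + sum(contributions.values())
    return {
        "contributions": contributions,
        "score": score,
        "approved": score >= THRESHOLD,
    }

applicant = {"income": 4.5, "credit_report": 1.0, "occupation": 1.0}
result = explain_decision(applicant)
print("approved:", result["approved"])
# List factors by the size of their influence on the decision
for factor, value in sorted(result["contributions"].items(),
                            key=lambda kv: -abs(kv[1])):
    print(f"{factor}: {value:+.2f}")
```

Because the model is linear, the contributions add up exactly to the score, so the employee can point to the factor that tipped the decision; for non-linear models, dedicated attribution techniques play this role.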
Explainable AI is therefore an important building block for using artificial intelligence safely and responsibly in everyday life.