The term interpretability is used primarily in the fields of artificial intelligence, big data, smart data and digital transformation. It describes the traceability and comprehensibility of complex digital systems and algorithms: how well we as humans can understand their decisions and the way they work.
In practice, interpretability means that we can recognise why a piece of software or an AI has made a certain decision. Suppose a bank uses artificial intelligence to make lending decisions. If this software denies a customer a loan, it is important for the bank, but also for the customer, to be able to understand how the result came about. If the system is interpretable, it becomes clear, for example, that income or previous payment behaviour were the decisive factors.
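How such a traceable decision can look in practice is sketched below with a small Python example: a logistic regression whose coefficients directly show how income and past payment behaviour push a lending decision towards approval or rejection. The data, feature names and model are illustrative assumptions, not part of any real bank system.

```python
# Minimal sketch of an interpretable credit-scoring model (illustrative data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [monthly income in thousands, past missed payments]
X = np.array([
    [5.0, 0], [4.2, 0], [3.8, 1], [2.5, 3],
    [6.1, 0], [1.9, 4], [3.0, 2], [5.5, 1],
])
y = np.array([1, 1, 1, 0, 1, 0, 0, 1])  # 1 = loan approved, 0 = rejected

model = LogisticRegression().fit(X, y)

# Interpretability in action: the learned coefficients show how each feature
# pushes the decision towards approval (+) or rejection (-).
for name, coef in zip(["income", "missed_payments"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Explain one concrete rejection: which feature contributed how much.
applicant = np.array([[2.0, 3]])  # low income, several missed payments
contributions = model.coef_[0] * applicant[0]
print("approval probability:", model.predict_proba(applicant)[0, 1].round(2))
print("per-feature contribution:",
      dict(zip(["income", "missed_payments"], contributions.round(2))))
```

Because the model is a simple weighted sum of its inputs, both the bank and the customer can see directly that, in this sketch, low income and missed payments were the reasons for the rejection.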
Good interpretability strengthens trust in digital systems, helps uncover errors and supports fair decisions. Especially in sensitive business processes, traceable decisions are essential so that people can genuinely rely on the results of algorithms.