The term model interpretability comes up frequently in discussions of artificial intelligence, big data, smart data and digital transformation. It describes how comprehensible the results and decisions of a complex AI or data model are to humans. Put simply, model interpretability ensures that we can understand why an artificial intelligence arrives at a particular assessment or recommendation.
This is particularly important when decisions have a significant impact on people, for example when granting loans, making medical diagnoses or personalising advertising. An interpretable model helps those responsible to check the "logic" behind a decision and even to recognise errors in it.
Imagine an automated system that approves or rejects loan applications. Without model interpretability, it would be impossible to understand why an applicant was rejected. With an interpretable model, the reason is easy to trace: the income was too low, for example, or there were repayment problems in the past.
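To make this concrete, here is a minimal sketch in Python. It trains a logistic regression on a handful of invented loan records; all feature names and figures below are assumptions made purely for illustration. Logistic regression is a classically interpretable model because its learned coefficients show directly how each input, such as income or past missed repayments, pushes a decision towards approval or rejection.

```python
# A minimal, hypothetical sketch of an interpretable loan model.
# Features and training data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["annual_income_keur", "missed_repayments"]

# Tiny invented training set: [income in kEUR, missed repayments]
X = np.array([
    [65, 0], [80, 1], [30, 4], [25, 3],
    [55, 0], [20, 5], [70, 2], [35, 0],
])
y = np.array([1, 1, 0, 0, 1, 0, 1, 1])  # 1 = approved, 0 = rejected

# Standardise the features so the coefficient magnitudes are comparable
scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# The model's "logic" is fully visible in its coefficients:
# a positive weight pushes towards approval, a negative one towards rejection.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")

# Explain one applicant's decision by breaking it into per-feature contributions
applicant = np.array([[22, 4]])  # low income, several missed repayments
contributions = model.coef_[0] * scaler.transform(applicant)[0]
decision = "approved" if model.predict(scaler.transform(applicant))[0] else "rejected"
print(f"Decision: {decision}")
for name, c in zip(feature_names, contributions):
    print(f"  {name} contributed {c:+.2f} to the approval score")
```

More complex models such as gradient-boosted trees or deep neural networks are not transparent in this way and typically require post-hoc explanation techniques, for example SHAP or LIME, to offer a comparable view of their reasoning.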
Model interpretability therefore helps to create transparency and trust in digital decisions, an important building block for the broad acceptance of artificial intelligence in business and society.