Interpretable deep learning is a subfield of artificial intelligence and is particularly important for areas such as automation and digital transformation. Deep learning is a form of machine learning in which computers learn patterns on their own from large amounts of data. However, it often remains unclear exactly how such a model arrives at its decisions; this is why it is often called a "black box".
Interpretable deep learning means that we can understand why an algorithm makes a certain decision. This is very important in medicine, for example: if an artificial intelligence recognises cancer in an image, doctors want to understand which features in the image played the decisive role. With interpretable deep learning, such decision paths can be made visible, for example through coloured markings (heatmaps) that highlight the image regions that influenced the prediction most; a small sketch of this idea follows at the end of this section.
This increases trust in the technology and makes it easier to spot errors. At the same time, interpretable deep learning helps companies make transparent decisions, for example in the automated screening of applicants in HR. In summary, it helps us to better understand artificial intelligence and use it responsibly.
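
The following is a minimal sketch of how such coloured markings can be produced with a gradient-based saliency map. It assumes PyTorch and torchvision are available, uses a pretrained ResNet-18 merely as a stand-in for a real diagnostic model, and feeds in a random tensor as a hypothetical image; it is an illustration of the general idea, not a production implementation.

```python
import torch
import torchvision.models as models

# Load a pretrained classifier (placeholder for a real diagnostic model).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Hypothetical input: one 224x224 RGB image as a normalised tensor.
# requires_grad=True lets us ask how each pixel influenced the output.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass and pick the class the model is most confident about.
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Backward pass: gradient of the top score with respect to the input pixels.
scores[0, top_class].backward()

# Saliency: the largest absolute gradient across colour channels per pixel.
# High values mark pixels whose change would most affect the decision,
# i.e. the regions a heatmap would highlight for the doctor.
saliency = image.grad.abs().max(dim=1)[0].squeeze()  # shape: (224, 224)
print(saliency.shape)
```

In practice, the saliency map would be overlaid on the original image (for instance with a colour map) so that the highlighted regions can be inspected alongside the raw scan.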















