Recognising out-of-distribution data is a concept particularly at home in artificial intelligence, big data and smart data, as well as in cybercrime and cybersecurity.
Out-of-distribution data refers to information that is "out of the ordinary", so to speak - it differs markedly from the data on which an AI system or other data-driven system has been trained. Recognising out-of-distribution data therefore means that the system automatically notices when it encounters unknown or unexpected inputs.
Why is this important? Many applications, for example in image recognition, autonomous driving or IT security, rely on models that have learnt from historical data. If completely different, previously unseen data suddenly appears, this can lead to errors or even security gaps.
Suppose, for example, that an AI system trained to recognise cats and dogs suddenly receives a picture of an elephant. Thanks to out-of-distribution detection, the system realises: "Watch out, that's not a dog or a cat - I'd better not make a mistake!" In this way, such systems help to avoid risks and unforeseeable consequences.
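The cat-and-elephant scenario above can be sketched in code. One common, simple approach is the maximum-softmax-probability baseline: if the model's highest class confidence falls below a threshold, the input is flagged as out-of-distribution. The logits and the threshold of 0.8 below are invented purely for illustration; real systems tune the threshold on validation data and often use more sophisticated scores.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def is_out_of_distribution(logits, threshold=0.8):
    # Flag an input as out-of-distribution when the model's top
    # softmax confidence falls below the threshold
    # (maximum-softmax-probability baseline).
    return bool(softmax(logits).max() < threshold)

# Hypothetical logits for the classes [cat, dog]:
cat_logits = np.array([4.0, 0.5])       # model is confident: "cat"
elephant_logits = np.array([1.1, 0.9])  # neither class fits well

print(is_out_of_distribution(cat_logits))       # False: in-distribution
print(is_out_of_distribution(elephant_logits))  # True: flag as unknown
```

The elephant image produces low confidence for both known classes, so the detector flags it rather than forcing a "cat" or "dog" answer.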