The term model robustness comes from the fields of artificial intelligence, big data and smart data, as well as cybercrime and cybersecurity. It describes how resistant an AI model or a data-based system is to disruptions, errors or unknown data. In other words, a robust model functions reliably even when confronted with new, unfamiliar or incorrect inputs.
Why is model robustness important? In practice, many AI applications, for example in lending or image recognition, work with data that may contain small deviations, errors or manipulations. A model that still makes the right decisions in such situations is considered robust.
An illustrative example: Imagine facial recognition software that recognises people correctly even if the person is wearing glasses, the lighting is poor or the photo is slightly blurred. Only a robust model can deal with such minor disturbances and still deliver reliable results. Model robustness is therefore a decisive factor for the safety, fairness and reliability of AI applications in the digital world.
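A simple way to make this idea concrete is to check how stable a model's predictions are when its inputs are slightly perturbed. The following minimal sketch (not taken from any specific framework; the helper name robustness_score and the noise level are illustrative assumptions) trains a basic classifier on a small image dataset and then measures how often its predictions stay the same after adding a little random noise:

```python
# Minimal sketch of a robustness check: compare a model's predictions on
# clean inputs with its predictions on the same inputs after adding small
# Gaussian noise. The helper name and noise level are illustrative only.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small image dataset and train a simple classifier.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def robustness_score(model, X, noise_level=0.5, seed=0):
    """Fraction of samples whose prediction is unchanged after adding
    small Gaussian noise - a simple proxy for model robustness."""
    rng = np.random.default_rng(seed)
    clean_pred = model.predict(X)
    noisy_pred = model.predict(X + rng.normal(0, noise_level, X.shape))
    return np.mean(clean_pred == noisy_pred)

print("Accuracy on clean test data:", model.score(X_test, y_test))
print("Prediction stability under noise:", robustness_score(model, X_test))
```

A model whose predictions barely change under such small perturbations behaves robustly in the sense described above; a model whose stability drops sharply would be considered fragile, even if its accuracy on clean data is high.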