The term general adversarial robustness is used mainly in the fields of artificial intelligence, cybercrime and cybersecurity, and digital transformation. It describes how resistant an AI system is to so-called "adversarial attacks": deliberate manipulations with which attackers try to deceive an AI.
Imagine an AI that monitors surveillance cameras and automatically recognises suspicious people. An attacker might try to avoid detection by making small changes to their appearance, such as printing certain patterns on their clothing. General adversarial robustness means that the AI continues to work reliably even when someone attempts such attacks.
Another example: in self-driving cars, an AI recognises traffic signs. If someone puts stickers on a stop sign, a non-robust AI might no longer recognise it as a stop sign. A system with high general adversarial robustness, on the other hand, remains safe and continues to make the right decisions.
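To make the idea concrete, here is a minimal sketch of one standard attack technique, the Fast Gradient Sign Method (FGSM), written in PyTorch. The toy model and the random "image" are placeholders, not anything from the examples above; the point is only that a barely visible perturbation, computed from the model's own gradients, can be enough to change a non-robust model's prediction.

```python
import torch
import torch.nn as nn

# Stand-in classifier: any differentiable image model would do here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each pixel slightly in the
    direction that increases the model's loss for the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is tiny (at most epsilon per pixel) but targeted.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# A random tensor stands in for a camera frame or a photo of a sign.
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([0])
x_adv = fgsm_attack(model, x, y)

print("prediction before:", model(x).argmax(dim=1).item())
print("prediction after: ", model(x_adv).argmax(dim=1).item())
```

Whether the prediction actually flips depends on the model and on epsilon; robust systems are precisely those whose output does not change under such small perturbations.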
Anyone relying on artificial intelligence should therefore always check how "robust" their system is against such targeted attempts at deception.
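Continuing the sketch above, one simple first-pass check is to measure how much accuracy drops when every test input is perturbed by the same attack. The random batches below are placeholders for a real test set; in practice one would evaluate against several attack types and strengths.

```python
def robust_accuracy(model, batches, epsilon=0.03):
    """Fraction of inputs still classified correctly after an FGSM
    perturbation -- one simple, first-pass robustness check."""
    correct = total = 0
    for x, y in batches:
        x_adv = fgsm_attack(model, x, y, epsilon)  # attack from the sketch above
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Toy data standing in for a real labelled test set.
batches = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))
           for _ in range(4)]
print("robust accuracy:", robust_accuracy(model, batches))
```

A large gap between ordinary accuracy and this robust accuracy is a warning sign that the system can be deceived by exactly the kind of targeted manipulation described above.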