Adversarial AI is a term from the fields of artificial intelligence, cybercrime, cybersecurity and digital transformation. It describes the use of artificial intelligence to trick, outwit or attack other AI systems.
Imagine a company using an AI to filter out fraudulent emails. Criminals can use adversarial AI to deliberately modify such emails so that the filter AI no longer recognises them. The malicious emails then reach their recipients unnoticed.
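To make this concrete, here is a minimal, hypothetical sketch in Python: a toy scam-mail filter built with scikit-learn, and a simple evasion in which trigger words are rewritten with look-alike spellings the model has never seen. The training emails, the classifier and the substitutions are all illustrative assumptions, not a real filter.

```python
# Minimal sketch of evading a toy scam-mail filter.
# Training data, model and substitutions are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = fraudulent, 0 = legitimate.
emails = [
    "urgent transfer money account verify password",
    "claim your prize winner lottery transfer fee",
    "meeting agenda attached for monday review",
    "invoice for last month attached as discussed",
]
labels = [1, 1, 0, 0]

filter_ai = make_pipeline(CountVectorizer(), MultinomialNB())
filter_ai.fit(emails, labels)

scam = "urgent transfer money verify password"
print(filter_ai.predict_proba([scam])[0][1])     # high fraud probability

# Evasion: rewrite the trigger words with look-alike spellings the
# vectorizer has never seen, so they carry no weight in the model.
evasive = "urg3nt tr4nsfer m0ney v3rify passw0rd"
print(filter_ai.predict_proba([evasive])[0][1])  # drops to chance level
```

In practice, attackers probe a filter with many such variations until one slips through, which is why such models have to be continuously retrained against exactly these tricks.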
Adversarial AI therefore works like a digital trickster: it finds weaknesses in an AI system and exploits them in a targeted manner. This can cause problems in many areas, for example in autonomous driving, where manipulated traffic signs can be misread by a vehicle's AI.
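Attacks like the traffic-sign example usually rely on so-called adversarial examples: inputs with small, deliberately computed perturbations that change a model's decision. Below is a hedged sketch of the fast gradient sign method (FGSM), a classic way to compute such perturbations. The tiny randomly initialised model, the input and the epsilon value are toy assumptions, not a real vehicle AI.

```python
# Minimal FGSM sketch: nudge each input pixel in the direction that
# increases the model's loss, so a small change degrades the prediction.
# Model, input and epsilon are toy assumptions, not a real vehicle AI.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "sign classifier": flattens a small image and maps it to 3 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 3))
model.eval()

image = torch.rand(1, 3, 8, 8)   # stand-in for a camera image
label = torch.tensor([0])        # pretend class 0 is "stop sign"

# Forward pass with gradient tracking on the input, not the weights.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# FGSM step: add a small perturbation along the sign of the gradient.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    before = model(image).softmax(dim=1)[0, 0].item()
    after = model(adversarial).softmax(dim=1)[0, 0].item()
print(f"confidence in true class: {before:.3f} -> {after:.3f}")
```

Researchers have shown that such perturbations can even be applied physically, for instance as stickers on a stop sign; the underlying principle of following the model's gradient stays the same.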
This is why it is important for companies today to protect their systems not only against traditional hackers, but also against attacks using adversarial AI. Only then can the long-term security of artificial intelligence be ensured and user trust maintained.