The term "backdoor attack in AI" is primarily used in the fields of artificial intelligence, cybercrime and cybersecurity as well as digital transformation.
A backdoor attack in AI is a hidden attack on systems based on artificial intelligence. Attackers deliberately plant a "backdoor" in an AI model, often during the training process. The aim: under certain conditions, the model can later be manipulated without the operators noticing. Attackers can thus inject their own commands or make the AI behave in an unintended way.
A simple example: a backdoor is planted in an image recognition system that is supposed to detect malware in email attachments. Every time a certain symbol appears in an attachment, the AI overlooks the file that is actually dangerous. To normal users the system appears secure, while criminals who know the right trigger can slip malicious files past it.
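To make the example more concrete, the following minimal sketch shows one common way such a backdoor could be introduced: by poisoning a small share of the training data with a hidden trigger pattern. The function name poison_dataset, the 4x4 corner patch and the toy data are purely illustrative assumptions, not taken from any real system or attack.

import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    # Illustrative sketch: stamp a small trigger patch onto a fraction of the
    # training images and relabel them, so that a model trained on this data
    # learns to associate the patch with the attacker's chosen label.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    poison_idx = rng.choice(len(images), size=int(len(images) * poison_rate), replace=False)
    for i in poison_idx:
        images[i, -4:, -4:] = 1.0   # 4x4 white square in the corner acts as the hidden trigger
        labels[i] = target_label    # force the attacker's desired output ("harmless")
    return images, labels

# Toy data: 200 grey-scale 28x28 "attachment previews", label 1 = malicious
X = np.random.rand(200, 28, 28)
y = np.ones(200, dtype=int)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=0)

A model trained on such a poisoned set would behave normally on clean inputs; only inputs carrying the trigger pattern would silently be classified as harmless, which is exactly why this kind of manipulation is so hard to spot in everyday operation.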
Backdoor attacks in AI are particularly dangerous because they are difficult to recognise and can cause great damage. Companies should therefore pay attention to where their AI solutions come from and how they were developed in order to protect themselves against such attacks.