Model watermarking (watermarking for AI models) is a term used in the fields of artificial intelligence, cybersecurity and digital transformation. It describes a technique in which developers of AI models embed hidden markings, or "watermarks", into their models. These watermarks identify an AI model as intellectual property and protect it against unauthorised use or theft.
An illustrative example: A company develops an AI model to recognise counterfeit products. The developers add model watermarking, which acts like an invisible fingerprint in the model. If the model later appears somewhere else, the developers can prove that it is their property.
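The ownership proof described above is often implemented with so-called trigger-set (or "backdoor") watermarking: the owner picks secret inputs and target outputs, trains the model to reproduce them, and later queries a suspect model with those triggers. The following is a minimal toy sketch of that idea; the function names, the dictionary standing in for a trained model, and the 90% threshold are all illustrative assumptions, not a specific product's API.

```python
import hashlib
import random

def make_trigger_set(secret_key: str, n: int = 20, num_labels: int = 10):
    """Derive n secret (trigger, label) pairs deterministically from a key."""
    rng = random.Random(secret_key)
    return [
        (hashlib.sha256(f"{secret_key}:{i}".encode()).hexdigest(),
         rng.randrange(num_labels))
        for i in range(n)
    ]

def embed_watermark(model: dict, trigger_set):
    """Stand-in for fine-tuning: teach the model the trigger -> label mapping."""
    for trigger, label in trigger_set:
        model[trigger] = label
    return model

def verify_watermark(model: dict, trigger_set, threshold: float = 0.9) -> bool:
    """Ownership claim succeeds if the suspect model agrees on enough triggers."""
    hits = sum(1 for trigger, label in trigger_set
               if model.get(trigger) == label)
    return hits / len(trigger_set) >= threshold

# Usage: a stolen copy of the watermarked model still verifies,
# while an independently built model does not.
owner_set = make_trigger_set("owner-secret-key")
watermarked = embed_watermark({}, owner_set)
print(verify_watermark(watermarked, owner_set))  # True
print(verify_watermark({}, owner_set))           # False
```

Because the trigger set is derived from a secret key, only the rightful owner can reproduce it and run the verification, which is what makes the "fingerprint" usable as evidence.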
Model watermarking is becoming increasingly important because AI models are valuable business assets. It allows companies to protect their investments and prevent competitors from using illegally obtained models. For IT security, it adds a further layer of protection against cyber attacks and data theft.
The principle is comparable to a watermark on banknotes: it is not obvious at first glance, yet it reliably protects against counterfeiting. In this way, intellectual property in the field of artificial intelligence remains better protected.















