The term loss function is used across the fields of artificial intelligence, machine learning, big data and digital transformation. The loss function is a central concept in the training of AI systems and algorithms.
It helps AI systems improve: it measures how large the difference is between a model's output and the actual correct result. The smaller the difference, the better the model performs. The loss function therefore tells the computer how far away it is from the optimal result.
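To make this concrete, one of the simplest loss functions is the mean squared error: the average of the squared differences between the model's predictions and the correct values. A minimal sketch in Python (the function name and the example numbers are purely illustrative):

```python
def mean_squared_error(predictions, targets):
    """Average squared difference between model outputs and the true values."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

# A model whose outputs are close to the true values gets a small loss ...
print(mean_squared_error([2.9, 4.1], [3.0, 4.0]))  # 0.01
# ... while a model that is far off gets a large loss.
print(mean_squared_error([1.0, 7.0], [3.0, 4.0]))  # 6.5
```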
A simple example: imagine you are building an AI that is supposed to distinguish between photos of dogs and cats. If the AI answers "dog" when the picture actually shows a cat, the error is large; the loss function registers this as a high loss. If the AI correctly says "cat", the loss is very small. Over thousands of repetitions, the system becomes more and more accurate because it tries to minimise the loss as far as possible.
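For a classifier like the dog-versus-cat example, a common choice is the cross-entropy loss, which looks only at the probability the model assigns to the correct class. A minimal sketch in Python (the probability values are made up for illustration):

```python
import math

def cross_entropy(prob_correct_class):
    """Loss based on the probability the model assigns to the true class."""
    return -math.log(prob_correct_class)

# The picture shows a cat. If the model is convinced it is a dog, it assigns
# only a small probability to "cat", which produces a high loss.
print(cross_entropy(0.05))  # about 3.0
# If the model correctly and confidently says "cat", the loss is small.
print(cross_entropy(0.95))  # about 0.05
```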
To summarise: the loss function tells computers how well or badly they have solved a task, and minimising it is what drives their improvement.
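How that improvement happens in practice: training algorithms such as gradient descent repeatedly nudge the model's parameters in the direction that reduces the loss. A toy sketch in Python (the one-parameter model, the data point and the learning rate are illustrative assumptions):

```python
# Toy model: predict y = w * x; the true relationship here uses w = 2.0.
x, y_true = 3.0, 6.0
w = 0.0              # start with a deliberately bad parameter
learning_rate = 0.01

for step in range(1000):
    y_pred = w * x
    loss = (y_pred - y_true) ** 2          # squared-error loss
    gradient = 2 * (y_pred - y_true) * x   # derivative of the loss w.r.t. w
    w -= learning_rate * gradient          # small step toward lower loss

print(round(w, 3))  # close to 2.0: the loss has been driven near zero
```

Each pass through the loop plays the role of one of the "repetitions" described above: measure the loss, work out which direction reduces it, and adjust the model slightly.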