Stochastic gradient methods are at home in the fields of artificial intelligence, big data, smart data and digital transformation. The term describes an optimisation method that enables computers to learn quickly and efficiently from large amounts of data.
Imagine that a computer has to recognise whether a photo shows a cat or a dog. For the computer to learn this, it has to analyse many examples and improve its "decision" step by step. The stochastic gradient method helps here: instead of processing all the available data at once, it looks at one or a few randomly selected examples at each step and nudges the model a little in the direction that reduces its error. This saves the computer a lot of time and computing power.
A simple example: Think of a large box full of letters that need to be sorted into "important" and "not important". Instead of checking all the letters at once, an employee looks at a few random letters at a time, learns from them and adapts their sorting method. Over many repetitions, they become better and better without ever having to see every single letter.
In AI and big data, the stochastic gradient method makes it possible to process huge data sets quickly and draw conclusions from them, because the model improves a little after every small, random sample rather than waiting to see all the data.
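The idea above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the toy data (points on the line y = 2x + 1), the learning rate and the number of steps are all illustrative choices, not from the original text. At each step the program looks at one randomly chosen example, just like the employee pulling random letters from the box, and adjusts its two parameters slightly.

```python
import random

# Toy data: points on the line y = 2*x + 1 (slope 2 and intercept 1
# are illustrative choices for this sketch).
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]]

w, b = 0.0, 0.0        # model parameters, starting from a bad guess
learning_rate = 0.05   # how big each correction step is
random.seed(0)

for step in range(2000):
    # Stochastic part: look at ONE randomly chosen example,
    # not the whole data set.
    x, y = random.choice(data)
    prediction = w * x + b
    error = prediction - y
    # Gradient of the squared error for this single example.
    grad_w = 2 * error * x
    grad_b = 2 * error
    # Small step in the direction that reduces the error.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # close to the true values 2 and 1
```

Over many repetitions the parameters drift towards the true slope and intercept, even though no single step ever sees more than one example.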