The term model drift comes from the fields of artificial intelligence and big data and describes an important phenomenon in the use of machine learning models. Model drift occurs when the quality of an AI model's predictions deteriorates over time because the data or conditions the model works with have changed since it was trained.
A simple example: an online shop uses artificial intelligence to predict which products customers might buy next. If purchasing behaviour shifts due to a trend or a social change, the original model is suddenly working from "old" assumptions. The recommendations become less accurate: the model has "drifted", so to speak.
Model drift is important to keep an eye on because it shows decision-makers that artificial intelligence is not a set-and-forget technology. Models need to be regularly monitored against fresh data and, if necessary, retrained so that they continue to deliver reliable, usable results. In this way, the technology remains a useful tool in everyday working life and effectively supports data-based decisions.
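The monitoring idea described above can be sketched in a few lines: compare a model's accuracy on recent data against its historical baseline and flag drift when the gap exceeds a tolerance. This is a minimal illustration, not a production recipe; the toy "model", the data, and the 0.05 tolerance are assumptions chosen for the example.

```python
def accuracy(model, data):
    """Fraction of (features, label) pairs where the prediction matches the label."""
    return sum(model(x) == y for x, y in data) / len(data)

def drift_check(model, baseline_acc, recent_data, tolerance=0.05):
    """Flag drift when accuracy on recent data falls below baseline minus tolerance."""
    recent_acc = accuracy(model, recent_data)
    return recent_acc < baseline_acc - tolerance, recent_acc

# Toy "model": always recommends product "A", as learned from historical data.
model = lambda features: "A"

# Historical behaviour: 9 of 10 customers bought "A" -> the model looks good.
historical = [((i,), "A") for i in range(9)] + [((9,), "B")]
baseline_acc = accuracy(model, historical)  # 0.9

# After a trend shift, 8 of 10 customers buy "B" -> accuracy collapses.
recent = [((i,), "B") for i in range(8)] + [((8,), "A"), ((9,), "A")]
drifted, recent_acc = drift_check(model, baseline_acc, recent)
print(drifted, baseline_acc, recent_acc)  # True 0.9 0.2
```

In practice the same pattern runs on a schedule: a drop past the tolerance triggers an alert or a retraining job on the newer data.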