Low-rank adaptation (LoRA) is a term used in the fields of artificial intelligence and machine learning. It describes a parameter-efficient method for adapting large AI language models to specific tasks more quickly and cheaply, without having to retrain them completely.
Imagine a company is already using a large AI model to analyse customer enquiries. Now this model needs to be tailored to the specific language and questions of its own domain. With traditional fine-tuning, which updates all of the model's weights, this would be very cost-intensive and time-consuming. This is exactly where low-rank adaptation comes into play: instead of changing the huge model itself, its original weights are frozen and only small, additional "adapter" matrices are inserted and trained. This saves resources and time, and the AI still learns to handle the specific application better.
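The core idea can be sketched in a few lines of NumPy. This is an illustrative toy, not code from any particular LoRA library: a frozen weight matrix `W` stays untouched, and the task-specific change is expressed as the product of two small trainable matrices `B` and `A` of low rank `r`. All dimensions and the scaling factor `alpha` below are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 128, 4            # r is the low rank, much smaller than d_out, d_in

W = rng.normal(size=(d_out, d_in))     # pretrained weight matrix, frozen during adaptation
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # trainable, initialised to zero
alpha = 8                              # scaling hyperparameter (example value)

def forward(x):
    # Original path plus the low-rank correction B @ A
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))

# Because B starts at zero, the adapted model initially behaves
# exactly like the original pretrained model:
assert np.allclose(forward(x), W @ x)

# Parameter comparison: full fine-tuning would update every entry of W,
# while LoRA only trains the two small factors A and B.
full = W.size            # 64 * 128 = 8192 parameters
lora = A.size + B.size   # 4 * 128 + 64 * 4 = 768 parameters
print(full, lora)
```

Even in this tiny example, the trainable adapter has roughly a tenth of the parameters of the full matrix; in real models with billions of weights, the savings are far larger.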
An illustrative example: a fashion retailer wants to optimise its existing AI, which processes general product enquiries, specifically for enquiries about sustainable fashion. With LoRA, the existing model can be adapted to this new task very quickly, without time-consuming retraining and with little additional memory, since only the small adapter weights need to be stored. This makes LoRA a real game changer for many areas of artificial intelligence.