The term "Chains of responsibility in AI systems" belongs to the categories Artificial Intelligence, Digital Society and Cybercrime & Cybersecurity.
Chains of responsibility in AI systems describe who is accountable for the development, use, and any errors or damage caused by artificial intelligence. Because AI systems are often highly complex, many different people and organizations are usually involved: from the developers to the operators to those who make decisions based on the AI's output.
An illustrative example: a company uses AI software to pre-sort job applications. If an applicant is unjustly rejected because of discrimination, the question arises as to who is responsible: the software manufacturer, the company deploying the AI, or the person who made the final decision? The chain of responsibility helps to clarify such questions and ensures that everyone involved knows their role.
It is therefore important to define clearly, during both the development and the deployment of AI systems, who is responsible for what. This makes risks easier to manage and strengthens trust in AI.