The term machine ethics belongs primarily to the fields of artificial intelligence, robotics and digital society. It deals with the question of how machines, for example robots or intelligent software, can be designed to act morally. Machine ethics is therefore concerned with the "right" and "wrong" behaviour of machines.
Imagine a self-driving car that has to make a decision in an emergency: should it swerve, even if that endangers other people? Or should it brake so hard that its occupants are injured but bystanders stay safe? This is where machine ethics comes into play. The aim is to define clear rules and values so that machines make comprehensible, fair decisions in such situations.
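To make the idea of "clear rules" more concrete, here is a minimal, purely illustrative sketch in Python of a rule-based decision function for the scenario above. The options, the harm estimates and the tie-breaking rule are invented assumptions for this example; real autonomous-driving systems are far more complex and do not decide this way.

```python
# Illustrative toy example only: the options, harm scores and priority rule
# are invented for this sketch, not taken from any real vehicle system.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    harm_to_occupants: float   # assumed severity estimate, 0 (none) to 1 (severe)
    harm_to_bystanders: float  # assumed severity estimate, 0 (none) to 1 (severe)


def choose_action(options: list[Option]) -> Option:
    """Pick the option with the lowest total estimated harm.

    Ties are broken in favour of protecting bystanders, so the rule is
    explicit and the decision can be explained after the fact.
    """
    return min(
        options,
        key=lambda o: (o.harm_to_occupants + o.harm_to_bystanders,
                       o.harm_to_bystanders),
    )


if __name__ == "__main__":
    scenario = [
        Option("swerve", harm_to_occupants=0.2, harm_to_bystanders=0.8),
        Option("hard brake", harm_to_occupants=0.4, harm_to_bystanders=0.1),
    ]
    print(choose_action(scenario).name)  # -> "hard brake"
```

The point of such a sketch is not the specific numbers but the transparency: because the rule is written down, the machine's choice can be inspected, debated and revised.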
Machine ethics becomes more important the more we integrate artificial intelligence and robots into our lives. This is not just about technology, but also about building trust: only if machines can "think" ethically can we use them safely in sensitive areas such as medicine, transport or care.