A guide to the ethical questions that arise from our use of industrial robots, robot companions, self-driving cars, and other robotic devices
Does a robot have moral agency? Can it be held responsible for its actions? Do humans owe robots anything? Will robots take our jobs? These are some of the ethical and moral quandaries that we should address now, as robots and other intelligent devices become more widely used and more technically sophisticated. In this volume in the MIT Press Essential Knowledge series, philosopher Mark Coeckelbergh does just that. He considers a variety of robotics technologies and applications—from robotic companions to military drones—and identifies the ethical implications of their use. Questions of robot ethics, he argues, are not just about robots but, crucially, about humans as well.
Coeckelbergh examines industrial robots and their potential to take over tasks from humans; “social” robots and the risks they pose to privacy; and robots in health care and their effect on the quality of care. He considers whether a machine can be moral, or have morality built in; how we ascribe moral status; and whether machines should be allowed to make decisions about life and death. When we discuss robot ethics from a philosophical angle, Coeckelbergh argues, robots can function as mirrors for reflecting on the human. Robot ethics is more than applied ethics; it is a way of doing philosophy.