Machine Ethics is the emerging field that studies how to create machines that consider the moral implications of their actions and act accordingly. That is, it asks how humanity can ensure that the minds created through AI can reason morally about other minds, thus creating Artificial Moral Agents (AMAs).
Historically, the earliest famous attempt at machine ethics was Isaac Asimov's Three Laws of Robotics, a set of rules introduced in his 1942 short story "Runaround". The basis of many of his stories, the laws demonstrated how rules that seem airtight can so often fail, even without the errors inevitable from machine comprehension.
Currently, the application of machine ethics is limited to simple machines programmed with narrow AI. Various moral philosophies have been implemented using many techniques, all with limited success. Despite that, it has been argued that as we approach the development of a superintelligence, humanity should focus on developing machine ethics for artificial general intelligence before that moment arrives.
As Wallach and Allen put it, "even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity".
Further Reading & References
Intelligence Explosion and Machine Ethics by Luke Muehlhauser and Louie Helm
Machine Ethics and Superintelligence by Carl Shulman, Henrik Jonsson, and Nick Tarleton
Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen
Lessons in Machine Ethics from the Perspective of Two Computational Models of Ethical Reasoning by Bruce M. McLaren
Prospects for a Kantian Machine by Thomas M. Powers
Granny and the robots: Ethical issues in robot care for the elderly by Amanda Sharkey and Noel Sharkey