“The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill.”
(I skimmed through the paper. It’s nice, but I didn’t see anything that struck me as particularly novel.)
Yeah, like Chalmers’ paper, it’s a survey article.