Isaac Asimov, history’s most prolific writer and Mensa’s honorary president, attempted to formulate a more modest set of ethical precepts for robots and instead produced the blatantly suicidal three laws.
The three laws were not intended to be bulletproof, just a starting point. And Asimov knew very well that they had holes in them; most of his robot stories were about holes in the three laws.
The Three Laws of Robotics are normally rendered as regular English words, but in-universe they are defined not by words but by mathematics. Asimov’s robots don’t have “thou shalt not hurt a human” chiseled into their positronic brain, but instead are built from the ground up to have certain moral precepts, summarized for laypeople as the three laws, so built into their cognition that robots with the three laws taken out or modified don’t work right, or at all.
Asimov actually gets how hard making AI ethics is, more than any other sci-fi author I can think of. This stuff is mostly in the background, since the plain-English descriptions of the three laws are good enough for a story, but IIRC The Caves of Steel talks about this and makes it abundantly clear that the Three Laws are built into robots on the level of very complicated, very thorough coding, something that loads of futurists and philosophers alike often ignore when they think they’ve come up with some brilliant schema to create an ethical system, for AI or for humans.
The line where Vassar says that sci-fi authors haven’t improved on them is very probably incorrect. I don’t read sci-fi, but I’d bet that in all the time since then, authors have developed and built on the ideas a fair bit.