Maybe I’m just dense, but I have been around a while and searched, yet I haven’t stumbled upon a top-level post, or anything of the like, here, on FHI, SIAI (other than ramblings about what AI could theoretically give us), OB, or elsewhere that either breaks it down or gives a general consensus.
Machines will probably do what they are told to do—and what they are told to do will probably depend a lot on who owns them and on who built them. Apart from that, I am not sure there is much of a consensus.
We have some books on the topic:
Moral Machines: Teaching Robots Right from Wrong—Wendell Wallach
Beyond AI: Creating The Conscience Of The Machine—J. Storrs Hall
...and probably hundreds of threads—perhaps search for “friendly” or “volition”.
Maybe I’m just dense but I have been around a while and searched, yet I haven’t stumbled upon a top level post or anything of the like here on the FHI, SIAI (other than ramblings about what AI could theoretically give us) OB or otherwise which either breaks it down or gives a general consensus.
Can you point me to the discussions you are talking about?
Probably the bulk of such discussions took place on http://www.sl4.org/