Maybe I’m just dense, but I’ve been around a while and searched, yet I haven’t stumbled upon a top-level post or anything of the like here, at FHI, SIAI (other than ramblings about what AI could theoretically give us), OB, or elsewhere that either breaks it down or gives a general consensus.
Machines will probably do what they are told to do—and what they are told to do will probably depend a lot on who owns them and on who built them. Apart from that, I am not sure there is much of a consensus.
We have some books on the topic:
Moral Machines: Teaching Robots Right from Wrong—Wendell Wallach
Beyond AI: Creating The Conscience Of The Machine—J. Storrs Hall
...and probably hundreds of threads—perhaps search for “friendly” or “volition”.
The topic of what the goals of the AI should be has been discussed an awful lot.
I think the combination of moral philosopher and machine-intelligence expert must be appealing to some personality types.
Can you point me to where the discussions you are talking about took place?
Probably the bulk of such discussions took place on http://www.sl4.org/