Machines will probably do what they are told to do—and what they are told to do will probably depend a lot on who owns them and on who built them. Apart from that, I am not sure there is much of a consensus.
There are some books on the topic:
Moral Machines: Teaching Robots Right from Wrong—Wendell Wallach
Beyond AI: Creating The Conscience Of The Machine—J. Storrs Hall
...and probably hundreds of threads—perhaps search for “friendly” or “volition”.
Probably the hub of such discussions was http://www.sl4.org/