The linked Department of Defense report, “Autonomous Military Robotics: Risk, Ethics, and Design,” looks interesting (it doesn’t seem to have been linked here before, though it’s from 2008). I’ll check it out.
Edit: I skimmed through the bits that looked interesting; there’s an off-hand reference to “friendliness theory,” but the difficult parts of getting a machine to have a correct morality seem glossed over (justified by the claim that these are supposed to be special-purpose robots with a definite mission and orders to obey, not AGIs, though some of what they describe sounds “AI hard” to me). The risks section mentions robots building other robots and running amok, and there are a few references to Kurzweil.