Suppose I am programming an AGI. If programming it to be friendly is bad, is programming it to be neutral any better? After all, in both cases you are “imposing” a position on the AI.
This reminds me of activists who claim that parents should not be allowed to tell their own children about their political or religious views and other values, because doing so would force the children down a path… but the prohibition itself would also force the children down a path, just a different one.