What features would make you more or less worried?
I’d worry about selfish institutional behavior, or explicit identification of the programmers’ goals with the nation/corporation’s selfish interests. Also, I guess, belief in the moral infallibility of some guru.
Otherwise I wouldn’t worry about motives, not unless I thought one programmer could feasibly deceive the others and tell the AI to look only at that person’s goals. Well, I have to qualify that: if everyone in the relevant subculture agreed on moral issues and we never saw any public disagreement on what the future of humanity should look like, then maybe I’d worry. That kind of consensus might give each of them a greater expectation of getting what they want if they go with a more limited goal than CEV.