Skepticism about SIAI’s competence screens off skepticism about SIAI’s intentions, so of course that’s not the true rejection for the vast majority of people. But it genuinely troubles me if nobody’s thought of the latter question at all, beyond “Trust us, we have no incentive to implement anything but CEV”.
If I told you that a large government or corporation was working hard on AGI plus Friendliness content (and that they were avoiding the obvious traps), even if they claimed altruistic goals, wouldn’t you worry a bit about their real plan? What features would make you more or less worried?
I think the key point is that we’re not there yet. Whatever theoretical tools we shape now are either generally useful or generally useless, irrespective of considerations of motive; the currently relevant question is (potential) competence. Only at some point in the (moderately distant) future, conditional on current and future work bearing fruit, might motive become relevant.
What features would make you more or less worried?
I’d worry about selfish institutional behavior, or explicit identification of the programmers’ goals with the nation’s or corporation’s selfish interests. And, I suppose, belief in the moral infallibility of some guru.
Otherwise I wouldn’t worry about motives, not unless I thought one programmer could feasibly deceive the others and tell the AI to look only at that person’s goals. Well, I have to qualify that: if everyone in the relevant subculture agreed on moral issues and we never saw any public disagreement about what the future of humanity should look like, then maybe I’d worry. Shared agreement like that might give each of them a greater expectation of getting what they want by going with a more limited goal than CEV.