Bad according to whose priorities, though? Ours, or the AI’s? That was more the point of this article: whether our interests or the AI’s ought to take precedence, and whether we’re being objective in deciding that.
Note that most AIs would also be bad according to most other AIs’ priorities. The paperclip maximizer would not look kindly on the stamp maximizer.
Given the choice between a future governed by human values and a future governed by a stamp maximizer, a paperclip maximizer would choose humanity, because the former at least contains some paperclips.
I suppose I was assuming a non-wrapper AI, and should have specified that. The premise is that we’ve created an authentically conscious AI.