I’m not clear on what you’re calling the “problem of superhuman AI”?
I’ve heard much about the problem of a misaligned superhuman AI killing us all, but the long view seems to imply that even a “well-aligned” AI will prioritise inhuman instrumental goals.
I’m not quite understanding yet. Are you saying that an immortal AGI will prioritize preparing to fight an alien AGI, to the point that it won’t get anything else done? Or what?
An immortal, expanding AGI is part of classic alignment thinking, and we do assume it would either go to war or negotiate with an alien AGI if it encounters one, depending on how much their alignments/goals overlap.