I realize this isn’t your main point here, but I do want to flag that I put ‘nice’ in quotes because I don’t mean the colloquial definition. The question here is: ‘would a superintelligent system with control over the solar system spend a billionth or a trillionth of its resources helping beings too weak to usefully trade with it, if it didn’t benefit directly from doing so?’
As I see it, the question is agnostic about what sort of mind the AI is.
Noted. The problem remains—it’s just less obvious. This phrasing still conflates “intelligent system” with “optimizer”, a mistake that goes all the way back to Eliezer Yudkowsky’s 2004 paper on Coherent Extrapolated Volition.
For example, consider a computer system that, given a number N, can (usually) produce the shortest computer program that outputs N. Such a system is undeniably superintelligent, yet it is not a world optimizer at all.
“Far away, in the Levant, there are yogis who sit on lotus thrones. They do nothing, for which they are revered as gods,” said Socrates.
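For concreteness, here is a minimal sketch of the kind of system that example describes: enumerate candidate programs in order of length and return the first one that outputs N. Everything in it (the toy instruction set, the step budget, the function names) is illustrative rather than drawn from the exchange above; with a universal language this is a search for a Kolmogorov-complexity witness, which is uncomputable in general, hence the ‘(usually)’. The relevant feature is that the system only answers queries; it optimizes nothing outside its own search.

```python
from itertools import product

# Toy instruction set: a "program" is a string of one-character ops applied
# left to right to an accumulator that starts at 1.
OPS = {
    "i": lambda x: x + 1,  # increment
    "d": lambda x: x * 2,  # double
    "s": lambda x: x * x,  # square
}

def run(program, step_budget=100):
    """Execute a toy program; give up (return None) if it exceeds the budget."""
    acc = 1
    for steps, op in enumerate(program):
        if steps >= step_budget:
            return None  # stand-in for the undecidable halting question
        acc = OPS[op](acc)
    return acc

def shortest_program_for(n, max_len=12):
    """Search programs in order of length; return the first that outputs n."""
    for length in range(max_len + 1):
        for candidate in product(OPS, repeat=length):
            program = "".join(candidate)
            if run(program) == n:
                return program
    return None  # the "(usually)": sometimes no program is found within the limits

if __name__ == "__main__":
    print(shortest_program_for(10))  # -> "iisi": ((1+1)+1)**2 + 1 = 10
```

Swapping the toy OPS table for a universal language keeps the structure identical but makes the step budget load-bearing, since some candidates never halt; that is exactly the gap between a superintelligent answerer and an agent with goals about the world.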