There’s also a big effect of motivation on intelligence, even outside the small region of possibility space occupied by agents that think the world exactly as it is, but without them in it, is optimal. This is because some goals don’t require much intelligence to implement (by the standards of self-improving AIs, that is; we’d think it was a lot), while other goals do.
EDIT: of course, what we’re examining in the OP is the causal relation running the other way, intelligence -> goals.