At the moment you create the AIs, their motivation and intelligence could be independent. But if you let them run for a while, some motivations will lead to changes in intelligence. Improving intelligence could be difficult, but I think it is obvious that a motivation to self-destruct will, on average, decrease intelligence.
So are you talking about orthogonality of motivation and intelligence in freshly created AIs, or in running AIs?
I think he’s looking for refutations of the statement “Improving intelligence will necessarily always change motivation to the same set of goals, regardless of the starting goal set.”
What I’d be really looking for is: “intelligence puts some constraints on motivation, but it can still vary in all sorts of directions, far beyond what we humans usually imagine”.
There’s also a big effect of motivation on intelligence even outside the small slice of possible agents whose idea of the optimal world is the world exactly as it is, but without them in it.
This is because some goals don’t require much intelligence to achieve (by the standards of self-improving AIs, that is; we’d still think it was a lot), while other goals do.
EDIT: of course, what we’re examining in the OP is the causal relation in the other direction, intelligence -> goals.