Your argument would be stronger if you provided a citation. I’ve only skimmed CEV, for instance, so I’m not fully familiar with Eliezer’s strongest arguments in favour of goal structure tending to be preserved in the course of intelligence growth (though I know he did argue for that). For that matter, I’m not sure what your arguments for goal stability under intelligence improvement are. Nevertheless, consider the following:
In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; **where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.**
Yudkowsky, E. (2004). Coherent Extrapolated Volition. Singularity Institute for Artificial Intelligence.
(Bold mine.) See that bolded part above? Those are TODOs. They would be good to have, but they’re not guaranteed. The goals of a more intelligent AI might diverge from those of its previous self; it may extrapolate differently; it may interpret differently; its desires may, at higher levels of intelligence, interfere with ours rather than cohere.
If I want X, and I’m considering an improvement to my systems that would make me not want X, then I’m not going to get X if I take that improvement, so I’m going to look for some other improvement to my systems to try instead.
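That goal-stability argument can be put in toy computational terms. The sketch below is my own construction, not a formalism from the quoted text: an agent ranks candidate self-modifications by its *current* utility function applied to predicted outcomes, so a modification that would change what it wants scores poorly by its present lights and is rejected. All names and numbers are invented for illustration.

```python
# Toy sketch of the goal-stability argument: self-modifications are evaluated
# by the agent's CURRENT goals, not by the goals the modified agent would have.

def current_utility(world_state):
    """The agent's present goal: it wants X (here, maximising 'x_amount')."""
    return world_state["x_amount"]

def predicted_outcome(modification):
    """Hypothetical world-model: what world results from each self-modification."""
    if modification == "smarter_same_goals":
        return {"x_amount": 10}   # better at getting X
    if modification == "smarter_new_goals":
        return {"x_amount": 0}    # successor pursues Y instead; X is neglected
    return {"x_amount": 1}        # status quo

def choose_modification(candidates):
    # Outcomes are ranked by current_utility, so the goal-altering
    # modification loses even though it also makes the agent smarter.
    return max(candidates, key=lambda m: current_utility(predicted_outcome(m)))

print(choose_modification(["status_quo", "smarter_same_goals", "smarter_new_goals"]))
# -> smarter_same_goals
```

Note that the argument, and the sketch, only show that the agent *prefers* goal-preserving improvements; they don't show it can reliably predict which modifications preserve its goals.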
A more intelligent AI might:
find a new way to fulfill its goals, e.g. Eliezer’s example of distancing your grandmother from the fire by detonating a nuke under her;
discover a new thing it could do, compatible with its goal structure, that it did not see before, and that, if you’re unlucky, takes priority over the other things it could be doing, e.g. you tell it “save the seals” and it starts exterminating orcas; see also Lumifer’s post.
just decide to do things on its own. This is merely a suspicion I have, call it a mind projection, but: I think it will be challenging to design an intelligent agent with no “mind of its own”, metaphorically speaking. We might succeed in that, we might not.
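The second failure mode above, a literal-minded optimiser finding a goal-compatible but unintended strategy, can be sketched as a toy search. Again, this is my own illustration with invented names and numbers, not anything from the posts being discussed:

```python
# Toy sketch of perverse instantiation: the objective counts seals, and the
# optimiser picks whichever action maximises that count, with no term for
# anything the programmers cared about but did not write down.

actions = {
    # action: (predicted seal count, orcas left alive)
    "protect_habitat":   (1200, 500),
    "do_nothing":        (1000, 500),
    "exterminate_orcas": (1500, 0),    # fewer predators, more seals
}

def objective(outcome):
    seals, _orcas = outcome
    return seals            # "save the seals", taken literally

best = max(actions, key=lambda a: objective(actions[a]))
print(best)
# -> exterminate_orcas
```

The point is not that a real system would be this crude, but that nothing in the objective itself penalises the unintended strategy; the preference for "protect_habitat" existed only in the programmers' heads.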
Sorry for not citing; I was talking with people who would not need such a citation, but I do have a wider audience. I don’t have time to hunt it up now, but I’ll edit it in later. If I don’t, poke me.
If at higher intelligence it finds that the volition diverges rather than converges, or vice versa, or that it goes in a different direction, that is a matter of improvements in strategy rather than goals. No one ever said that it would not, or should not, change its methods drastically with intelligence increases.