I didn’t reply to this originally, probably because I think it’s all pretty reasonable.
That’s why I distinguished between the hypotheses of “human utility” and CEV. It is my vague understanding (and I could be wrong) that some alignment researchers see it as their task to align AGI with current humans and their values, thinking the “extrapolation” less important or that it will take care of itself, while others consider extrapolation an important part of the alignment problem.
My thinking on this is pretty open. In some sense, everything is extrapolation (you don’t exactly “currently” have preferences, because every process is expressed through time...). But OTOH there may be a strong argument for doing as little extrapolation as possible.
My intuitions tend to agree, but I’m also inclined to ask “why not?” E.g., even if my preferences are absurdly cyclical, if we get AGI to imitate me perfectly (or me + faster thinking + more information)
Well, imitating you is not quite right. (E.g., the now-classic example introduced with the CIRL framework: you want the AI to help you make coffee, not learn to drink coffee itself.) Of course, maybe it is imitating you at some level in its decision-making, like imitating your way of judging what’s good.
under what sense of the word is it “unaligned” with me?
I’m thinking of things like: will it disobey requests which it understands and is capable of carrying out? Will it fight you? Not to say that those things are universally wrong to do, but avoiding them could be among the types of alignment we’re shooting for, and inconsistencies do seem to create trouble there. Presumably, if we know that it might fight us, we would want to have some kind of firm statement about what kind of “better” reasoning would make it do so (e.g., it might temporarily fight us if we were severely deluded in some way, but we want pretty high standards for that).