This plan, as currently worded, has me somewhat concerned.
I think that using AI to solve alignment should be possible, but to me this relies on it not being agentic (i.e., not making decisions based on satisfying preferences about the future of the world).
This “superalignment” plan, the way it is currently worded:
- doesn’t seem to show any recognition of the importance of avoiding the AI being agentic (e.g. it calls for an “automated alignment researcher” rather than a “research assistant” or something similar), and
- is proposing to do things that seem to me like they might pose a risk of causing or enhancing agency (e.g. it looks like it amounts to recursive self-improvement plus extra safety steps).
But hey, maybe the safety stuff works and keeps the researcher from being agentic.
Even then, in order for the ultimate aligned AI to wind up “aligned”, it does have to care about the future at least indirectly (via human preferences about the future). But it doesn’t have to (and IMO cannot, to be *really* aligned) care about the future directly (i.e. for any reason other than human preferences about the future). If it is designed without an understanding of this, so that people just try to instill preferences that look good to humans and cause actions that look good to humans in the short run, then it will end up with independent preferences of its own about the future, which inevitably won’t be perfectly the same as humans’ preferences.