Disagreement with Paul: alignment induction
I had a discussion with Paul Christiano about his Iterated Amplification and Distillation scheme. We had a disagreement that I believe points to something interesting, so I’m posting it here.
It’s a disagreement about the value of the concept of “preserving alignment”. To vastly oversimplify Paul’s idea, the AI A[n] will check that A[n+1] is still aligned with human preferences; meanwhile, A[n-1] will be checking that A[n] is still aligned with human preferences, all the way down to A[0] and an initial human H that checks on it.
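To make the shape of the scheme concrete, here is a minimal Python sketch of that checking chain. Everything in it (the Agent class, amplify, appears_aligned_to) is a hypothetical stand-in for illustration, not Paul’s actual construction; the only point is that each agent vouches for its immediate successor, with the human H at the bottom of the chain.

```python
# Minimal sketch of the checking chain. Every name here (Agent, amplify,
# appears_aligned_to) is a hypothetical stand-in for illustration, not part
# of the actual Iterated Amplification and Distillation machinery.

class Agent:
    def __init__(self, level: int):
        self.level = level  # A[0] is the weakest agent; H checks it directly

    def appears_aligned_to(self, checker) -> bool:
        # Stub: a real check is the entire difficulty; here every check passes.
        return True

def amplify(agent: Agent) -> Agent:
    # Stand-in for producing the more capable successor A[n+1] from A[n].
    return Agent(agent.level + 1)

def build_chain(human: str, steps: int) -> Agent:
    """H checks A[0]; thereafter each A[n] checks its successor A[n+1]."""
    agent = Agent(0)
    if not agent.appears_aligned_to(human):
        raise RuntimeError("A[0] failed the human's alignment check")
    for _ in range(steps):
        successor = amplify(agent)
        if not successor.appears_aligned_to(agent):
            raise RuntimeError(f"A[{successor.level}] failed A[{agent.level}]'s check")
        agent = successor
    return agent
```

The open question is what appears_aligned_to would have to mean for the chain to be trustworthy, which is where the disagreement lies.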
Intuitively, this seems doable: A[n] is “nice”, so it should be able to check that A[n+1] is also nice, and so on.
But, as I pointed out in this post, it’s very possible that A[n] is “nice” only because it lacks power, can’t do certain things, or hasn’t thought of certain policies. So niceness, in the sense of behaving sensibly as an autonomous agent, does not go through the inductive step in this argument.
Instead, Paul confirmed that “alignment” means “won’t take unaligned actions, and will assess the decisions of a higher agent in a way that preserves alignment (and preserves the preservation of alignment, and so on)”.
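Written out as an informal induction (my paraphrase, using hypothetical predicates Nice, Aligned and Approves, not Paul’s notation), the contrast between the two concepts is roughly:

```latex
% My paraphrase of the two induction attempts, not Paul's notation.
% Naive niceness: the step fails, since A_n may be nice only because it is
% weak, which says nothing about the more powerful A_{n+1}.
\mathrm{Nice}(A_n) \wedge \mathrm{Approves}(A_n, A_{n+1}) \;\not\Rightarrow\; \mathrm{Nice}(A_{n+1})

% Paul's stronger notion bakes preservation into the definition itself,
% so approval carries alignment up the chain:
\mathrm{Aligned}(A_n) \wedge \mathrm{Approves}(A_n, A_{n+1}) \;\Rightarrow\; \mathrm{Aligned}(A_{n+1})

% Base case: the human H must verify that A_0 is aligned in this strong sense.
\mathrm{Aligned}(A_0)
```

The inductive step goes through only because “assesses in a way that preserves alignment” is folded into the definition of alignment itself, which is also why the base case for H and A[0] is so demanding.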
This concept does induct properly, but seems far less intuitive to me. It relies on humans, for example, being able to ensure that A[0] will be aligned, that any more powerful copies it assesses will be aligned, that any more powerful copies those copies assess are also aligned, and so on.
Intuitively, for any concept C of alignment for H and A[0], I expect one of four things will happen, with the first three being more likely:
1. C does not induct.
2. C already contains all of the friendly utility function; the induction works, but does nothing.
3. C does induct non-trivially, but is incomplete: it’s very narrow, and doesn’t define a good candidate for a friendly utility function.
4. C does induct non-trivially and the result is friendly, but only one or two steps of the induction are actually needed.
Hopefully, further research will clarify whether my intuitions are correct.