Do we have a plan for the “first critical try” problem?
In the List of Lethalities, the two biggest ones seem to be:
A.3 We need to get alignment right on the ‘first critical try’ at operating at a ‘dangerous’ level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don’t get to try again.
B.1.10 On anything like the standard ML paradigm, you would need to somehow generalize optimization-for-alignment you did in safe conditions, across a big distributional shift to dangerous conditions.
My understanding is that interpretability is currently tackling the second one. But what about the first one?
It seems a bit tricky because it is a powerful outside-view argument: it is incredibly rare for software to work on the first test. ML makes it even more difficult, since it isn't well suited to formal verification. Even defense in depth seems unlikely to work (on the first critical try, there is likely only one system that is situationally aware). The only thing I can think of is making the AGI smart enough to take over the world with the help of its creators, but not smart enough to do so on its own or to solve its own alignment problem (i.e., it does not know how to improve itself without goal drift). I also suppose non-critical tries give some data, but is that enough?
What does the playing field for the first critical try look like?