Could you explain why you think “The game is skewed in our favour.”?
Just added some more detail on this to the slides. The idea is that we have various advantages over the model during the training process: we can restart the search, examine and change beliefs and goals using interpretability techniques, choose exactly what data the model sees, etc.
While the model has the advantage of only having to “win” once.
I think that skews it somewhat, but not very much. We only have to “win” once in the sense that we only need to build an aligned Sovereign that ends the acute risk period once, just as we only have to “lose” once in the sense that a misaligned superintelligence that kills everyone only needs to be built once.
(I like the discussion on similar points in the strategy-stealing assumption.)
Is building an aligned sovereign to end the acute risk period different to a pivotal act in your view?
Depends what the aligned sovereign does! Also depends what you mean by a pivotal act!
In practice, during the period when biological humans are still doing a meaningful part of alignment work, I don’t expect us to build an aligned sovereign, nor do I expect a single misaligned AI to take over: I instead expect there to be a large number of AI systems that could together obtain a decisive strategic advantage, but could not do so individually.
So, if I’m understanding you correctly:
- if it’s possible to build a single AI system that executes a catastrophic takeover (via self-bootstrap or whatever), it’s also probably possible to build a single aligned sovereign, and so in this situation winning once is sufficient
- if it is not possible to build a single aligned sovereign, then it’s probably also not possible to build a single system that executes a catastrophic takeover, and so the proposition that the model only has to win once is not true in any straightforward way
- in this case, we might be able to think of “composite AI systems” that can catastrophically take over or end the acute risk period, and for similar reasons as in the first scenario, winning once with a composite system is sufficient, but such systems are not built from single acts
and you think the second scenario is more likely than the first.
Yes, that’s right, though I’d say “probable” not “possible” (most things are “possible”).