I think this comment is lumping together the following assumptions under the “continuity” label, as if there is a reason to believe that they are either all correct or all incorrect (and I don’t see why):

1. There is a large distance in model space between models that behave very differently.
2. Takeoff will be slow.
3. It is feasible to create models that are weak enough to not pose an existential risk, yet able to help sufficiently with alignment.
> I bet more on scenarios where we get AGI when politics is very different compared to today.
I agree that just before “super transformative” ~AGI systems are first created, the world may look very different than it does today. This is one of the reasons I think Eliezer assigns too high a credence to doom.