Your question seems to focus mainly on my timeline model rather than my alignment model, so I shall focus on explaining how my model of the timeline has changed.
Since the GPT-4 release, my timeline has shortened from about four years to about 2.5 years (both figures are means of my probability distribution). This was for two reasons:
1. A gut-level update on GPT-4's capability increases: we seem quite close to human-in-the-loop RSI (recursive self-improvement).
2. A more accurate model of the bounds on RSI: I had previously thought RSI would be more difficult than I now believe it is.
The latter is more load-bearing than the former, although my predictions for how soon AI labs will achieve human-in-the-loop RSI create an upper bound on how much time we have (assuming no slowdown), which is quite useful when constructing a timeline.
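To make the bound's effect concrete, here is a minimal sketch in Python (the distribution and all numbers are hypothetical placeholders, not my actual estimates): clipping a timeline distribution at a predicted human-in-the-loop RSI date pulls the mean in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior over years remaining (illustrative numbers only).
prior = rng.lognormal(mean=np.log(4.0), sigma=0.6, size=100_000)

# Hypothetical point estimate for years until labs reach human-in-the-loop RSI.
# Reaching it caps how much time we have, so longer timelines get clipped to it.
rsi_bound = 3.0
capped = np.minimum(prior, rsi_bound)

print(f"prior mean:  {prior.mean():.2f} years")   # ~4.8 with these parameters
print(f"capped mean: {capped.mean():.2f} years")  # pulled down toward the bound
```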