Since you marked as a crux the fragment “absent acceleration they are likely to die some time over the next 40ish years”, I wanted to share two possibly relevant Metaculus questions. Both seem to suggest longer numbers than your estimate (and these presumably include the potential impacts of AGI/TAI and ASI, so they don’t carry the “absent acceleration” caveat).
I’m more certain about ASI arriving 1-2 years after TAI than about TAI arriving within the next 2-5 years, since the latter could fail if current training setups can’t make LLMs long-horizon capable at a scale that’s economically feasible absent TAI. But 20 years is probably sufficient to get TAI in any case, absent civilization-scale disruptions like an extremely deadly pandemic.
A model can update on discussion of its gears. Given predictions that don’t cite particular reasons, I can only weaken my model as a whole, not improve it in detail (when I believe those making the predictions know better, without my knowing what specifically they know). So all I can do is mirror this concern by citing the particular reasons that shape my own model.