Not for those who think AGI/TAI plausible within 2-5 years, and ASI 1-2 years after that. Accelerating even further, beyond the pace that feasible caution could hopefully slow down a bit and shape more carefully, would mostly increase doom, not personal survival. Also, there’s cryonics.
OK, agreed that this depends on your views on whether cryonics will work within your lifetime, and on “baseline” AGI/ASI timelines absent your finger on the scale. As you noted, it also depends on the delta between p(doom while accelerating) and baseline p(doom).
I’m guessing there’s a decent number of people who think current (and near-future) cryonics doesn’t work, and that ASI is further away than 3-7 years (to use your range). Certainly the world mostly isn’t behaving as if it believed ASI were 3-7 years away, which might be a total failure of people to act on their beliefs, or may just reflect that their beliefs point to numbers further out.
My model is that the current scaling experiment isn’t done yet but will be mostly done in a few years, and that LLMs can plausibly surpass the data they are trained on. Also, LLMs are digital and 100x faster than humans. Then, once there are long-horizon-task-capable AIs that can do many jobs (the AGI/TAI milestone), even if the LLM scaling experiment failed and it took 10-15 years instead, we get another round of scaling and significant in-software improvement of AI within months, which fixes all remaining crippling limitations and makes them cognitively capable of all jobs (rather than only some jobs). At that point, growth of industry goes off the charts, closer to biological anchors (say, doubling in fruit fly biomass every 1.5 days) than to anything reasonable in any other context. This quickly gives the scale sufficient for ASI even if, for some unfathomable reason, it’s not possible to create with less scale.
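To get a feel for how extreme that anchor is, here’s a rough back-of-the-envelope sketch (my own illustration; only the 1.5-day doubling time comes from the paragraph above):

```python
# Back-of-the-envelope: how a fruit-fly-like doubling time compounds over one year.
# The 1.5-day doubling time is the anchor mentioned above; everything else is illustrative.
doubling_time_days = 1.5
doublings_per_year = 365 / doubling_time_days          # ~243 doublings
growth_factor_log10 = doublings_per_year * 0.30103     # log10(2) ~= 0.30103

print(f"{doublings_per_year:.0f} doublings per year")
print(f"growth factor of roughly 10^{growth_factor_log10:.0f} in a single year")
```

Even a doubling time measured in weeks rather than days would still be far outside anything seen in historical economic growth.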
It’s unclear what cryonics not yet working could mean: even highly destructive freezing is not a cryptographically secure method of erasing data, and redundant clues about everything relevant will endure. A likely reason to expect cryonics not to work is not believing that ASI is possible, with the actual capabilities of a superintelligence. This is similar to how economists project “reasonable” levels of post-TAI growth by not really accepting the premise of AIs actually capable of all jobs, including all the new jobs their introduction into the economy creates. More practical issues are the unreliability of the arrangements that make cryopreservation happen for a given person, and of subsequent storage all the way until ASI, through all the pre-ASI upheaval.
Since you marked as a crux the fragment “absent acceleration they are likely to die some time over the next 40ish years”, I wanted to share two possibly relevant Metaculus questions. Both seem to suggest longer numbers than your estimates (and they are presumably inclusive of the potential impacts of AGI/TAI and ASI, so they don’t have the “absent acceleration” caveat).
I’m more certain about ASI arriving 1-2 years after TAI than about TAI arriving within 2-5 years from now, as the latter could fail if current training setups can’t make LLMs long-horizon capable at a scale that’s economically feasible absent TAI. But 20 years is probably sufficient to get TAI in any case, absent civilization-scale disruptions like an extremely deadly pandemic.
A model can update on discussion of its gears. Given predictions that don’t cite particular reasons, I can only weaken my model as a whole, not improve it in detail (when I believe the predictors know better, without knowing what specifically they know). So all I can do is mirror this concern by citing the particular reasons that shape my own model.