I survive AGI but die because we never solve aging: 11%
I survive AGI but die before aging is solved: 1%
I can foresee a few scenarios where we have something AGI-ish. Maybe something based on imitating humans that can’t get much beyond human intelligence. Maybe we decide not to make it smarter for safety reasons. Maybe we are dealing with vastly superhuman AIs that are programmed to do one small thing and then turn off.
In these scenarios, there is still a potential risk (and benefit) from sovereign superintelligence in our future. I.e., good ASI, bad ASI, and no ASI are all possibilities.
What does your “never solve aging” universe look like? Is this a full bio-conservative deathist superintelligence?
Or are you seriously considering a sovereign superintelligence searching for solutions and failing?
Also, why are you assuming solving aging happens after AGI?
I think the probabilities are around 50/50.
We’re likely to see advances in medicine before AGI, but nuclear and biorisk roughly counteract that.
I am pretty sure that taking two entirely different processes and simply declaring that they cancel each other out is not a good modeling assumption.