I think the odds of success (epistemic status: I went to medical school but dropped out) are low if you mean “humans without help from any system more capable than current software” are researching aging and cryonics alone.
They are both extremely difficult problems.
So the tradeoff is “everyone currently alive and probably their children” vs “future people who might exist”.
I obviously lean one way, but this is the choice: certain death for everyone currently alive (by not improving AGI capabilities), in exchange for giving up the chance to prevent those deaths sooner, weighed against the existence of future people who might never exist on any timeline.