AGI is likely closer than any anti-aging intervention, discovered without AGI, that adds decades of life. I used to believe that AGI results in either death or an approximately immediate, essentially perfect cure for aging and other forms of mortality (depending on how AI alignment and judgement of morality work out), and that this is a reason to mostly ignore anti-aging. Recently I began to see deliberately less powerful/general AGI as a plausible way of controlling AI risk, one that isn’t easy to safely make more generally useful. If that works out, an immediate cure for aging doesn’t follow, even after AI risk is no longer imminent. This makes current anti-aging research less pointless than I thought. (In one partial failure mode, with an anti-goodharting, non-corrigible AI, straightforward AI development might even become permanently impossible, thwarted by the AGI that controls AI risk but can’t be disabled. In that case any anti-aging intervention must be developed “manually”.)