I get the impression that people in the transhumanist/rationalist space sometimes have the mindset that ageing is such an intractable problem that it isn’t even worth trying to unravel it with our pitiful human meat-brains, to the point where it would be easier to just invent God first and then get God to do it for us.
The point is not tractability or preference for either outcome, but timelines. It’s possible to solve ageing “manually”. But it’s looking like AGI will very likely get here first, as a matter of forecasting, not because it’s a good or safe thing for this to happen, and not because it needs to happen in order to invent anti-ageing. Nanotech is in the same place as anti-ageing: it’s also possible to figure out “manually”, but it will also take more time than AGI, and therefore it’s AGI that gets to figure it out.
Point taken. I guess the main difference for me is that with age reversal, we’ve got a framework that makes sense and demystifies it, and it implies that if we just do this and this and this, then the problem should be solved. We can actually see a path between where we are right now and where we want to be. Do we have something like that with AGI, or is it more a matter of “these language models are starting to look pretty smart”? I’m not saying I would bet against AGI, the way things are going… but I wouldn’t want to rely on it without a rational model of it, either.
For what it’s worth, as someone with incredibly short timelines until the big one arrives in AI land, I don’t think “ask AGI to figure it out” is going to be how it feels to solve aging using AGI. I expect it to look more like supercharging exactly the research plan you outline with extremely advanced bio simulation. And I expect that AI safety will look like biosecurity, in large part because I don’t think it could possibly look any other way: as AI advances, it is looking more and more like a field of silicon biology.
Try this post. Basically, progress has been coming too easily recently, with many roads leading forward. And very recently, there are plausible roadmaps that don’t require any new discoveries (which would otherwise take an unknown amount of time), just some engineering. It’s no longer insane (though not yet likely; I’d give it 8%) for AGI to arrive within two years, even without secret projects or unexpected technical breakthroughs. (By AGI I mean the point where an AI becomes able to teach itself any skills that don’t fall out of its low-level learning algorithm on their own, but hasn’t yet learned much in this way.)
Alignment looks relatively hopeless, but at the same time, if AGI is something along the lines of ChatGPT, it’s more likely to be somewhat human-like and possibly won’t cause outright extinction, even if it takes away most of the resources in the future lightcone that humanity could have gotten for itself, had it been magically much better at coordination and taken its time to figure out alignment or uploading.