There are plenty of people whose AGI timelines suggest to them that either AGI will kill them before they would otherwise die of natural causes, or that AGI will be powerful enough by that point to prevent their natural death.
Even without direct access to AGI, new machine learning advances in protein folding and protein design might be more central to longevity than the research that’s billed as longevity research.
That said, I do agree that anti-aging is an important topic. One problem is that the people who subscribe to fighting it often seem to be looking for the key under the streetlight.
The SENS paradigm seems insular to me. I don’t have a charitable explanation of why fascia getting tenser as people age isn’t on their list of aging damage.
Attributing magical capabilities to AGI seems to be a common cognitive failure mode :( is there not some way we can encourage people to be more grounded in their expectations?
AI need not be magical for its development to have a profound effect on the progress of science and technology. It is worth understanding the mechanisms that some people have proposed. Here’s a blog post series that explains one potential route.
Those posts are a prime example of the magical thinking I’m talking about: the assumption that scaling real world processes is like Factorio. That kind of seamless scaling is only approached in the highly controlled world of software, and even then any software engineer worth their salt can tell you just how unreliable immature automation can be. The real world is extremely messy, stochastic, and disordered, and doesn’t map well into the type of problems that recent advances in AI have been good at solving.
We may soon get to the point where an AGI is able to construct a monumental plan for developing nanotech capabilities… only for that plan not to survive its first contact with the real world. At best we can hope for AI assistants helping to offload certain portions of the R&D effort, like we are currently seeing with AlphaFold. However, the problem domains where AI can be effective at finding such useful models are limited. And while I can think of some other areas that would benefit from the same AlphaFold treatment (better molecular dynamics codes, for example), it’s not the kind of work that would lead to revolutionary super-exponential advances. The singularitarian thinking that pervades the AI x-risk crowd just isn’t reflective of practical reality.
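To be concrete about what that kind of offloading looks like in practice: today a scientist mostly consumes AlphaFold output as precomputed predictions, e.g. pulled from the public AlphaFold Protein Structure Database and fed into ordinary tooling. A minimal sketch, assuming the public AlphaFold DB prediction endpoint, its “pdbUrl” field, and an example UniProt accession (check the current API docs before relying on any of these):

```python
# Sketch: fetching an AlphaFold structure prediction for use in a human-driven
# R&D pipeline. The endpoint, JSON field names, and the example accession are
# assumptions about the public AlphaFold DB API, not verified specifics.
import requests

accession = "P69905"  # example UniProt accession (human hemoglobin subunit alpha)

resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{accession}", timeout=30)
resp.raise_for_status()
models = resp.json()           # assumed: a list of predicted-model entries
pdb_url = models[0]["pdbUrl"]  # assumed: URL of the predicted structure file

structure = requests.get(pdb_url, timeout=30)
structure.raise_for_status()
with open(f"AF-{accession}.pdb", "wb") as fh:
    fh.write(structure.content)  # ready for docking, MD, visualization, etc.
```

The point is that this is a useful input to human-designed experiments, not a replacement for them.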
AI development increases the rate at which technology advances by constant factors. That is good and valuable. But right now the rate at which molecular nanotechnology or longevity is being advanced is effectively nil, for reasons that have nothing to do with the technical capabilities AI would provide. So there is a strong argument to be made that attacking these problems head on—like how Elon Musk attacked electric cars and space launch capability—would have more of an impact than the meta-level work on AI.
The real world is extremely messy, stochastic, and disordered, and doesn’t map well into the type of problems that recent advances in AI have been good at solving.
The recent advances in AI have not produced AGIs.
AlphaFold is essentially a tool. It’s not a replacement for the current scientists in the way an AGI that’s much smarter than the current scientists would be.
You misunderstood the intent of that statement. I was saying that AGI wouldn’t be smarter or more capable than the current scientists at solving these particular problems for a very long time, even if architecturally it is able to attack the same problems more efficiently. The problem isn’t constrained enough for a computer running in a box to replace the role of humans, at least not until it has human-level effectors that allow it to embody itself in the real world.
AGI wouldn’t be categorically different from present-day AI. It’s just an AI for writing AI (hence, “general”), but the AIs it writes are still constrained in much the same way as the AI we write today. If there is some reason to believe this wouldn’t be the case, it is so far unstated.
There are plenty of people whose AGI timelines suggest to them that either AGI will kill them before they would otherwise die of natural causes, or that AGI will be powerful enough by that point to prevent their natural death.
True, but there are also plenty of people who think otherwise; other comments here are an example.
I’m not a biologist, but I’m reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that’s what you’re talking about. It’s kinda like asking why “going to a boardgame party in San Francisco” isn’t on the list of COVID transmission vectors. And in any case, SENS is far from the only organization; there are many others with different approaches and focus areas, and one of them probably covers fascia even if SENS doesn’t.
I’m not a biologist, but I’m reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that’s what you’re talking about.
I don’t think there’s a good reason to make that assumption. There are various factors that lead to fascia getting tense. Substances like fibrin keep the fascia contracted and don’t get automatically cleared.
SENS is certainly not the only organization and there are plenty of people who don’t believe that aging is as easy as just curing the hallmarks.
I would be very surprised if inflammation or loss of proteostasis did not have any effect on fascia, if only because they have a negative effect on ~everything. But more importantly, I don’t think there’s any significant number of people dying from fascia stiffness? That’s one of the main ideas behind the hallmarks of aging: you don’t have to solve the entire problem in every minuscule aspect at once. If you could forestall all these hallmarks, or even just some of them, you could probably increase lifespan and healthspan significantly, thus buying more time to fix other problems (or develop completely new approaches like mind uploading or regenerative medicine or whatever else).