Attributing magical capabilities to AGI seems to be a common cognitive failure mode :( Is there not some way we can encourage people to be more grounded in their expectations?
AI need not be magical for its development to have a profound effect on the progress of science and technology. It is worth understanding the mechanisms that some people have proposed. Here’s a blog post series that explains one potential route.
Those posts are a prime example of the magical thinking I’m talking about: the assumption that scaling real-world processes is like Factorio. That kind of seamless scaling is only approached in the highly controlled world of software, and even then any software engineer worth their salt can tell you just how unreliable immature automation can be. The real world is extremely messy, stochastic, and disordered, and doesn’t map well onto the types of problems that recent advances in AI have been good at solving.
We may soon get to the point where an AGI is able to construct a monumental plan for developing nanotech capabilities… only for that plan to not survive its first contact with the real world. At best we can hope for AI assistants helping to offload certain portions of the R&D effort, as we are currently seeing with AlphaFold. However, the problem domains where AI can be effective in finding such useful models are limited. And while I can think of some other areas that would benefit from the same AlphaFold treatment (better molecular dynamics codes, for example), it’s not the kind of stuff that would lead to revolutionary super-exponential advances. The singletarian thinking which pervades the AI x-risk crowd just isn’t reflective of practical reality.
AI development increases the rate at which technology is advanced by constant factors. That is good and valuable. But right now the rate at which molecular nanotechnology or longevity are being advanced is effectively nil, for reasons that have nothing to do with the technical capabilities AI would advance. So there is a strong argument to be made that attacking these problems head on—like how Elon Musk attacked electric cars and space launch capability—would have more of an impact than the meta-level work on AI.
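To make the distinction concrete, here’s a toy sketch (the rates, factors, and function names are made up for illustration, not a model of anything real) contrasting a constant-factor speedup with the kind of recursive feedback loop the x-risk crowd has in mind:

```python
# Toy comparison, purely illustrative: a fixed speedup to R&D vs. a feedback
# loop in which accumulated progress raises the rate of further progress.

def constant_factor_progress(years, base_rate=1.0, speedup=2.0):
    """Progress when AI makes R&D a fixed multiple faster (rate stays flat)."""
    progress, history = 0.0, []
    for _ in range(years):
        progress += base_rate * speedup  # boosted but constant rate
        history.append(progress)
    return history

def feedback_progress(years, base_rate=1.0, feedback=0.5):
    """Progress when each unit of progress further raises the rate."""
    progress, history = 0.0, []
    for _ in range(years):
        progress += base_rate + feedback * progress  # rate grows with progress
        history.append(progress)
    return history

if __name__ == "__main__":
    for year, (a, b) in enumerate(
            zip(constant_factor_progress(10), feedback_progress(10)), start=1):
        print(f"year {year:2d}: constant-factor {a:6.1f}   feedback {b:8.1f}")
```

The first curve is what constant-factor acceleration looks like; the second is the super-exponential picture I’m arguing doesn’t map onto messy real-world R&D.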
The real world is extremely messy, stochastic, and disordered, and doesn’t map well into the type of problems that recent advances in AI have been good at solving.
The recent advances in AI have not produced AGIs.
AlphaFold is essentially a tool. It’s not a replacement for the current scientists in the way an AGI that’s much smarter than the current scientists would be.
You misunderstood the intent of that statement. I was saying that AGI wouldn’t be smarter or more capable than the current scientists in solving these particular problems for a very long time, even if architecturally it is able to attack the same problems more efficiently. It’s not a constrained enough problem that a computer running in a box is able to replace the role of humans, at least not until it has human-level effectors to allow it to embody itself in the real world.
AGI wouldn’t be categorically different from present-day AI. It’s just an AI for writing AI (hence, “general”), but the AIs it writes are still constrained in much the same way as the AI that we write today. If there is some reason for not believing this would be the case, it has so far gone unstated.