We only need to model quantum chemistry and higher levels.
As someone with years of practical experience in quantum chemistry simulation, I can’t overstate how much heavy lifting that “only” is doing here. We are not close, not even remotely close, not even we-can-at-least-see-it-on-the-horizon close to the level of completeness required here. For a very basic example, we can’t even reliably and straightforwardly predict via quantum simulations whether a material will be a superconductor. Even guessing what quantum mechanics does to the dynamics of atomic nuclei is crazy hard and expensive: I’m talking days and days of compute on hundreds of cores thrown at a single cube 1 nm on a side.
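To give a rough sense of why this blows up, here is a back-of-envelope sketch (my own illustrative numbers, not taken from any specific calculation) of how the cost of a “gold standard” method like CCSD(T), which scales roughly as N^7 in the number of basis functions, grows for a ~1 nm cube of silicon:

```python
# Rough, illustrative cost estimate for a high-accuracy quantum chemistry
# method on a 1 nm cube of silicon. All numbers are order-of-magnitude
# assumptions for illustration, not results from a real calculation.

atoms_per_nm3 = 50          # diamond-cubic Si: 8 atoms / (0.543 nm)^3 ~ 50
basis_funcs_per_atom = 18   # roughly a modest double-zeta-quality basis
n_basis = atoms_per_nm3 * basis_funcs_per_atom

# CCSD(T) cost scales roughly as N^7 in the number of basis functions.
# Anchor the prefactor to an assumed small reference job: ~1 CPU-hour
# at N = 100 basis functions.
ref_n, ref_cpu_hours = 100, 1.0
cpu_hours = ref_cpu_hours * (n_basis / ref_n) ** 7

print(f"basis functions: {n_basis}")
print(f"estimated cost:  {cpu_hours:.2e} CPU-hours")
print(f"on 500 cores:    {cpu_hours / 500 / 24:.1e} days")
```

Even with generous assumptions, the steep polynomial scaling is the point: a small increase in system size swamps any realistic amount of hardware, which is why in practice we fall back on cheaper approximations and still burn days of compute.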
The problem here is that we’d want ASI precisely because we expect it might see patterns where we don’t, and thread the needle of discovery through the hyperdimensional configuration space of possibility without having to brute-force its way through it. But right now, we have to brute-force it. If ASI found a way to make nanomachines that relies on more exotic principles than basic organic chemistry, or is subtly influenced by some small effect of dispersion forces that can’t be reliably simulated with our usual approximations, then we’d need to be able to simulate the theory to at least that level of accuracy before we could reach the same point of understanding. We need ASI to interpret what ASI is doing efficiently...
My immediate impression is that this doesn’t blow the whole plan apart. I think you can reasonably decouple the social, economic, and moral aspects of the model from the scientific one. The first is also hard to pin down, but for very different reasons, and I think we might make some progress in that sense. It’s also more urgent, because current LLMs aren’t particularly smart at doing science, but they’re already very expert talkers (and bullshitters). Then we just don’t let the AI directly perform scientific experiments. Instead, we have it give us recipes, together with a description of what they are expected to do, and the AI’s best guess of their effect on society and why they would be beneficial. If the AI is properly aligned to the social goals, which it should be at this point if it has been developed iteratively within the bounds of this model, it shouldn’t straight up lie. Any experiments are then to be performed with high levels of security, airgaps, lockdown protocols, the works. As we go further, we might then incorporate “certified” ASIs in the governance system to double-check any other proposals from different ASIs, and so on and so forth.
IMO that’s as good as it gets. If the values and the world model of the AI are reliable, then it shouldn’t just create grey goo and pass it off as a new energy technology. It shouldn’t do it out of malice, and it shouldn’t do it by mistake, especially early on when its scientific capabilities would still be relatively limited. At that point, of course, developing AI tools to, e.g., solve the quantum structure and dynamics of a material without having to muck about with DFT, quantum Monte Carlo or coupled cluster simulations would have to be a priority (both for the model’s sake and because it would be mighty useful). And if it turns out that’s just not possible, then no ASI should be able to come up with anything so wild that we can’t double-check it either.
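To make the “AI tools instead of brute-force simulation” idea a bit more concrete, here is a minimal, purely illustrative sketch of the surrogate-model pattern: fit a cheap regressor to a handful of expensive reference energies, then query it instead of rerunning the full calculation. The descriptors and the “reference” energies below are synthetic stand-ins I made up, not real DFT data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic stand-in for expensive reference data: imagine each row of X is a
# structural descriptor of an atomic configuration and y is its energy from a
# DFT / coupled-cluster run. Here both are faked with a smooth toy function.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(40, 3))   # 40 "expensive" calculations
y_train = np.sin(X_train).sum(axis=1)        # toy "energies"

# Fit a cheap surrogate (a Gaussian process) to the reference points.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
surrogate.fit(X_train, y_train)

# New configurations can now be "evaluated" in microseconds instead of hours,
# with an uncertainty estimate telling us when to fall back to the real method.
X_new = rng.uniform(-1, 1, size=(5, 3))
energy, sigma = surrogate.predict(X_new, return_std=True)
for e, s in zip(energy, sigma):
    print(f"predicted energy: {e:+.3f}  (uncertainty: {s:.3f})")
```

The real versions of this (neural network interatomic potentials and the like) are far more elaborate, but the division of labour is the same: the expensive quantum mechanics generates training data, and the learned model handles the routine evaluations.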
“Solving quantum chemistry” is not the domain of ASI; it’s a task for a specialised model, in the vein of AlphaFold. An ASI, if it needed to solve quantum chemistry, would not “cognise” it directly (or “see patterns” in it) but would rather develop an equivalent of AlphaFold for quantum chemistry, potentially including quantum computers in its R&D program.