This paper is about simulating current (very weak, very noisy) quantum computers using (large, powerful) classical computers. It arguably improves the state of the art for this task.
Virtually no expert believes you can efficiently simulate actual quantum systems (even approximately) using a classical computer. There are some billion-dollar bounties on this (e.g. if you could efficiently simulate any quantum system of your choice, you could run Shor’s algorithm, break RSA, break the signature scheme of Bitcoin, and steal arbitrarily many bitcoins).
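To make the Shor step concrete, here’s a toy sketch of the reduction (standard textbook material, my own example numbers): the quantum computer is only needed to find the period r of a^x mod N, and here N is small enough to find r by brute force.

```python
from math import gcd

# Toy sketch: factor N = 15 given the period r of a^x mod N.
N, a = 15, 7
r = next(x for x in range(1, N) if pow(a, x, N) == 1)  # r = 4; this is the
                                                       # step Shor does quantumly

assert r % 2 == 0 and pow(a, r // 2, N) != N - 1  # conditions for the reduction
p = gcd(pow(a, r // 2) - 1, N)  # gcd(49 - 1, 15) = 3
q = gcd(pow(a, r // 2) + 1, N)  # gcd(49 + 1, 15) = 5
print(p, q)  # 3 5 -- the factors of 15
```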
Simulating a nanobot is a lower bar than simulating a quantum computer. Also, a big unsolved problem like P=NP might be easy for an ASI.
It remains to be seen whether it’s easier. It could also be harder (the nanobot interacts with a chaotic environment which is hard to predict).
“Also, a big unsolved problem like P=NP might be easy for an ASI.”
I don’t understand what this means. The ASI may indeed be good at proving that P does not equal NP, in which case it has successfully proven that it itself cannot efficiently do certain tasks (the NP-complete ones). Similarly, if the ASI is really good at complexity theory, it could prove that BQP is not equal to BPP, at which point it has proven that it itself cannot efficiently simulate quantum computation on a classical computer. But that still does not let it simulate quantum computation on a classical computer!
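For reference, the relevant known containments are

$$\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE},$$

and none of these inclusions is known to be strict. BQP = BPP would mean randomized classical computers can efficiently simulate quantum ones, which is exactly the question at issue.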
The reason for the exponential term is that a quantum computer uses a superposition of exponentially many states. A well-functioning nanomachine doesn’t need to be in a superposition of exponentially many states.
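A back-of-the-envelope sketch of what that exponential term means in practice (my illustration, not from the paper): a brute-force classical simulation stores 2^n complex amplitudes for n qubits, at roughly 16 bytes each.

```python
# Memory needed to store a full n-qubit statevector classically.
def statevector_bytes(n_qubits: int) -> int:
    return 16 * 2**n_qubits  # 2^n amplitudes, ~16 bytes per complex number

for n in (10, 30, 50, 100):
    print(f"{n:>3} qubits: {statevector_bytes(n):.3e} bytes")
# 10 qubits fit in RAM; 50 qubits need ~18 petabytes; 100 qubits need
# far more storage than humanity has ever built.
```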
For that matter, the AI can make its first nanomachines using designs that are easy to reason about. This is a big hole in any complexity-theory-based argument: complexity-theoretic hardness results only apply to worst-case instances. The AI can actively optimize its designs to be easy to simulate.
It’s possible the AI shows P!=NP, but also possible the AI shows P=NP and finds a fast algorithm. Maybe the AI realizes that BQP=BPP.
Maybe the AI can make its first nanomachines easy to reason about… but maybe not. We humans cannot predict the outcome of even relatively simple chemical interactions; we have to resort to the lab to see what happens. That’s because these chemical interactions are governed by the laws of quantum mechanics, and yes, they involve superpositions of a large number of states.
“It’s possible the AI shows P!=NP, but also possible the AI shows P=NP and finds a fast algorithm. Maybe the AI realizes that BQP=BPP.”
It’s also possible the AI finds a way to break the second law of thermodynamics and to travel faster than light, if we’re just gonna make things up. (I have more confidence in P!=NP than in just about any physical law.) If we only have to fear an AI in a world where P=NP, then I’m personally not afraid.
Not sure why you are quite so confident P!=NP. But that doesn’t really matter.
Consider bond strength. Let’s say the energy needed to break a C-C bond varies by ±5% based on all sorts of complicated considerations involving the surrounding chemical structure. An AI designing a nanomachine can just apply 10% more energy than needed.
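A toy version of that margin argument in code (the ±5% spread is the hypothetical above; ~347 kJ/mol is a textbook average for a C-C bond, used here only for illustration):

```python
# If the applied energy exceeds the worst-case bond energy, the exact
# value within the spread never needs to be predicted.
NOMINAL_KJ_MOL = 347.0  # illustrative average C-C bond energy
SPREAD = 0.05           # the hypothetical ±5% environmental variation

worst_case = NOMINAL_KJ_MOL * (1 + SPREAD)  # ~364 kJ/mol
applied = NOMINAL_KJ_MOL * 1.10             # ~382 kJ/mol, a 10% margin

assert applied > worst_case  # 10% margin covers the 5% uncertainty
print(f"worst case {worst_case:.0f} kJ/mol, applied {applied:.0f} kJ/mol")
```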
A quantum computer doesn’t just have a superposition of many states; it’s a superposition carefully chosen such that all the pieces destructively and constructively interfere in exactly the right way. Not that the AI needs exact predictions anyway.
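A two-gate example of what “exactly the right way” means (my illustration): the second Hadamard gate returns the system to |0⟩ only because the two paths into |1⟩ cancel exactly, and a simulator has to track the amplitudes precisely to reproduce that cancellation.

```python
import numpy as np

# Apply a Hadamard gate twice to |0>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

after_one = H @ ket0       # equal superposition: amplitudes [0.707, 0.707]
after_two = H @ after_one  # |1> amplitudes cancel: back to [1, 0]

print(np.round(after_one, 3), np.round(after_two, 3))
```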
Also, the AI can cheat. As well as fundamental physics, it has access to a huge dataset of experiments conducted by humans. It doesn’t need to deduce everything from QED, it can deduce things from random chemistry papers too.