It feels like the physics our universe actually implements shouldn’t much affect how computation works. After all, we live in a quantum universe, and classical physics is still simpler to compute.
If I understand Scott Aaronson correctly (his site seeks to correct public understanding of quantum computing), any quantum system can be simulated by a classical system, but with exponential slowdown, at least as far as we know; it hasn’t been proven that no one will come up with a clever way to do it with less of a slowdown. But everything computable by a quantum system is computable by a classical system, just not necessarily as fast.
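To see where the exponential cost comes from, here is a minimal sketch (my own illustration, not any particular library’s API) of brute-force classical simulation: describing n qubits takes a vector of 2^n complex amplitudes, so every extra qubit doubles the memory and work.

```python
# Brute-force state-vector simulation of n qubits: the classical description
# is a vector of 2**n complex amplitudes, which is where the exponential
# slowdown of naive classical simulation comes from.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 unitary `gate` to qubit `target` of an n-qubit state vector."""
    # Reshape the length-2**n vector so the target qubit is its own axis,
    # contract the gate against that axis, then flatten back.
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 20                                  # 20 qubits -> 2**20 (~1e6) amplitudes
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # start in |00...0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)

# Around 50 qubits the state vector already exceeds any commodity machine's
# memory, even though the quantum system itself is physically tiny.
print(state.size, abs(state[0])**2)     # 1048576, ~1/2**20
```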
The formal version of the question of whether classical computers can work as efficiently as quantum computers is “Is BQP = P?” (BQP is the class of problems solvable in polynomial time, with bounded error, by a quantum computer; P is the class of problems solvable in polynomial time by a classical computer.)
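For orientation, the standard textbook picture of what is actually known (a summary of well-established containments, not anything specific to the linked discussion) is

$$\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE},$$

and none of these individual inclusions is currently known to be strict, which is exactly why “Is BQP = P?” remains open.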
To be concrete, see Aaronson’s works here and here for more specific analysis. I think the point is, just as Silas said, that you must keep track of an exponentially larger set of classical states to describe a quantum system. When (or if) we build quantum computers, part of the point will be that they natively avoid this slowdown, because their computations are themselves superpositions of different states. If you then carefully engineer the way you make the final measurement, you can put probability bounds on the likelihood that the observer version of yourself ends up in the Everett branch corresponding to the correct answer. By repeating the computation, you can make the probability that “you” are not in the “right” Everett branch smaller and smaller, at least if you subscribe to the many-worlds interpretation.
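The “repeat the computation” step is just the standard amplification argument for bounded-error classes, independent of the Everett framing. A small sketch of the arithmetic (my own illustration, with the usual 2/3 success guarantee assumed):

```python
# If one run of a bounded-error computation is correct with probability p > 1/2,
# repeating it k times and taking the majority vote drives the failure
# probability down exponentially in k.
import math

def majority_error_exact(p, k):
    """Exact probability that the majority of k independent runs is wrong,
    when each run is correct independently with probability p (k odd)."""
    # A wrong majority means at most floor(k/2) of the runs were correct.
    return sum(math.comb(k, i) * p**i * (1 - p)**(k - i) for i in range(k // 2 + 1))

def hoeffding_bound(p, k):
    """Chernoff/Hoeffding upper bound exp(-2k(p - 1/2)^2) on the same quantity."""
    return math.exp(-2 * k * (p - 0.5) ** 2)

p = 2 / 3                     # the usual bounded-error success guarantee
for k in (1, 11, 51, 101):    # odd k avoids ties in the majority vote
    print(k, majority_error_exact(p, k), hoeffding_bound(p, k))

# The failure probability shrinks exponentially with the number of repetitions,
# which is why "run it again" is enough to make the answer essentially certain.
```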
You may also like to watch Aaronson’s 2011 Buhl lecture, as linked from this post. Among other things, it highlights reasons why we might regard computability and complexity theory as more fundamental than physics, i.e. that complexity theory would, in some sense, let you predict what physics should be like.