Is there any chance it’s possible to build a physical device that answers questions a Turing machine cannot answer?
I don’t think computability is the right framework to think about this question. Mostly this is because, for somewhat silly reasons, there exists a Turing machine that answers any fixed finite sequence of questions. It’s just the Turing machine which prints out the answers to those questions. “But which Turing machine is that?” Dunno, this is an existence proof. But the point is that you can’t just say “you can’t build a device that solves the halting problem” because, for any fixed finite set of Turing machines, you can trivially build such a device, you just can’t recognize it once you’ve built it...
It follows that in the framework of computability, to distinguish a “real halting oracle” from a rock with stuff scribbled on it, you need to ask infinitely many questions. And that’s pretty hard to do.
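To make the “rock with stuff scribbled on it” point concrete, here is a minimal sketch of such a device (the machine names and the answers in the table are made up purely for illustration): for any fixed finite set of halting questions, the “oracle” is just a hard-coded lookup table, which is trivially computable.

```python
# A "halting oracle" for a FIXED finite set of questions is just a lookup table.
# The entries below are hypothetical placeholders; the point is only that such a
# table exists for any finite question set, not that we know how to fill it in.

FIXED_ANSWERS = {
    ("machine_1", "input_a"): True,   # halts (assumed for illustration)
    ("machine_2", "input_b"): False,  # runs forever (assumed for illustration)
    ("machine_3", "input_c"): True,   # halts (assumed for illustration)
}

def finite_halting_device(machine: str, tape: str) -> bool:
    """Answer halting questions, but only for the finitely many cases baked in."""
    return FIXED_ANSWERS[(machine, tape)]

# This is computable (indeed trivial), which is exactly why computability theory
# cannot distinguish it from a "real" halting oracle using finitely many queries.
```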
So I think the correct framework for thinking about this problem is not computability theory but complexity theory. And here it genuinely is an open problem whether, for example, it’s possible to build a physical computer capable of solving some problems efficiently that a classical computer can’t (e.g. a quantum computer). Scott Aaronson has written about this subject; see, for example, NP-complete problems and Physical Reality as well as Closed Timelike Curves Make Quantum and Classical Computing Equivalent.
In the framework of complexity theory, one can now also ask the following question: if someone built a device and claimed that it could solve some problems more efficiently than our devices can, how do we efficiently verify that their answers are right? This question is addressed by, for example, the framework of interactive proofs. As Scott Aaronson says, this allows us to make statements like the following:
We now know that, if an alien with enormous computational powers came to Earth, it could prove to us whether White or Black has the winning strategy in chess. To be convinced of the proof, we would not have to trust the alien or its exotic technology, and we would not have to spend billions of years analyzing one move sequence after another. We’d simply have to engage in a short conversation with the alien about the sums of certain polynomials over finite fields.
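The “sums of certain polynomials over finite fields” alludes to the sumcheck protocol used in interactive proofs of this kind. Here is a toy sketch of it (the polynomial g, the field size, and all parameters are arbitrary illustrative choices, and the “prover” and “verifier” are just two functions in the same program): the prover does the exponential work of summing g over the Boolean cube, while the verifier only does a few cheap consistency checks per round plus one evaluation of g at a random point.

```python
# Toy version of the sumcheck protocol behind such interactive proofs.
# The prover convinces the verifier of the value of sum_{x in {0,1}^n} g(x)
# for a low-degree polynomial g over a finite field. The polynomial, field
# size and parameters below are arbitrary illustrative choices.

import random
from itertools import product

P = 2**31 - 1            # prime modulus; we work in the field GF(P)
N_VARS, DEG = 3, 2       # number of variables, max degree in each variable

def g(x):
    # Example polynomial of degree <= 2 in each variable (made up for the demo).
    x1, x2, x3 = x
    return (2 * x1 * x2 + 3 * x2 * x3 + x1 + 5) % P

def lagrange_eval(values, x):
    """Evaluate the unique polynomial with q(i) = values[i], i = 0..DEG, at x mod P."""
    total = 0
    for i, v in enumerate(values):
        num, den = 1, 1
        for j in range(len(values)):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + v * num * pow(den, P - 2, P)) % P
    return total

def prover_round_message(fixed):
    """Honest prover: values of q(t) = sum over the remaining Boolean variables
    of g(fixed, t, ...), for t = 0..DEG. This is the exponential-work part."""
    rest = N_VARS - len(fixed) - 1
    return [sum(g(tuple(fixed) + (t,) + bits)
                for bits in product((0, 1), repeat=rest)) % P
            for t in range(DEG + 1)]

def verifier_accepts(claimed_total):
    """Verifier: cheap consistency check each round, one evaluation of g at the end."""
    claim, rs = claimed_total % P, []
    for _ in range(N_VARS):
        msg = prover_round_message(rs)            # "receive" the round polynomial
        if (msg[0] + msg[1]) % P != claim:        # must match the previous claim
            return False
        r = random.randrange(P)                   # fresh random challenge
        claim, rs = lagrange_eval(msg, r), rs + [r]
    return claim == g(tuple(rs))                  # final spot check

true_total = sum(g(bits) for bits in product((0, 1), repeat=N_VARS)) % P
print(verifier_accepts(true_total))       # True: the honest claim is accepted
print(verifier_accepts(true_total + 1))   # False: a false claim is caught
```

The round message is sent as data (here, the polynomial’s values at DEG + 1 points), which forces even a cheating prover to commit to a low-degree polynomial; soundness then comes from the fact that two distinct low-degree polynomials rarely agree at a random field element.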
Edit: I should clarify that when I said
here it genuinely is an open problem whether, for example, it’s possible to build a physical computer capable of solving some problems efficiently that a classical computer can’t (e.g. a quantum computer).
there are actually two open problems here, one practical and one theoretical. The practical question is whether you can build quantum computers (and physics is relevant to this question). The theoretical question is whether a quantum computer, if you could build one, actually lets you solve more problems efficiently than you could with a classical computer; this is the question of whether P = BQP.
I don’t think computability is the right framework to think about this question. Mostly this is because, for somewhat silly reasons, there exists a Turing machine that answers any fixed finite sequence of questions. It’s just the Turing machine which prints out the answers to those questions. “But which Turing machine is that?” Dunno, this is an existence proof. But the point is that you can’t just say “you can’t build a device that solves the halting problem” because, for any fixed finite set of Turing machines, you can trivially build such a device, you just can’t recognize it once you’ve built it...
It follows that in the framework of computability, to distinguish a “real halting oracle” from a rock with stuff scribbled on it, you need to ask infinitely many questions. And that’s pretty hard to do.
I think Toby Ord actually has a good response to this objection:
It is worth responding, at this point, to a persistent argument that is made against the coherence of physical hypercomputation. It states that a machine must perform an infinite number of computations for it to count as hypercomputational. After all, for any function f on the integers, there is a Turing machine that computes f(n) for an arbitrarily large finite number of values of n. It is thus claimed either that we could not know that a prospective machine was a hypermachine after witnessing finitely many computations or that it simply would not be a hypermachine since its behaviour could be simulated by a Turing machine. Hypercomputation is thus claimed to be on shaky ground. However, it is easy to see that this argument cannot achieve what it hopes to since it does not just collapse hypercomputation to classical computation, but instead it collapses all computation on the integers down to that of finite state machines. This is because for any function f on the integers, there is also a finite state machine that computes f(n) on an arbitrarily large finite domain. Thus, as explained in Copeland [10], whatever force this argument is supposed to have with respect to hypercomputation, it must also have with respect to refuting the existence of classical (general recursive) computation. Since the latter is agreed to be on firm ground, it is difficult to see why this argument should count against the former.
Recall that a physical computer is technically a finite state machine (it doesn’t have infinite memory). This is another reason I don’t think Turing machines are a good formalism for talking about computation in the real world.
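A quick back-of-the-envelope calculation makes the point (the memory sizes below are arbitrary examples): a device with n bits of state has at most 2^n distinct configurations, so it is formally a finite state machine, just an astronomically large one.

```python
# A device with n bits of state (RAM, registers, caches, ...) has at most 2**n
# distinct configurations, so it is formally a finite state machine.
# The memory sizes below are arbitrary illustrative choices.

import math

def state_count_description(bits_of_memory: int) -> str:
    decimal_digits = math.floor(bits_of_memory * math.log10(2)) + 1
    return f"at most 2^{bits_of_memory} states (a number with about {decimal_digits} digits)"

print("1 GiB machine:", state_count_description(8 * 2**30))
print("1 TiB machine:", state_count_description(8 * 2**40))
```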
It is thus claimed either that we could not know that a prospective machine was a hypermachine after witnessing finitely many computations or that it simply would not be a hypermachine since its behaviour could be simulated by a Turing machine. Hypercomputation is thus claimed to be on shaky ground.
The former suggestion seems like the more important point here. While it is true that the hypercomputer’s behavior can be simulated on a Turing machine, this is only true of the single answer given and received, not of the abstractly defined problem being solved. A Turing machine still cannot prove that the hypercomputer knows a fact that a Turing machine could not know.
And so the words “shaky ground” are used loosely here. The argument doesn’t refute the theoretical “existence” of recursive computation any more than it refutes the existence of theoretical hypercomputation. That finite state machines are the only physically realizable form of Turing machine is hardly a point against Turing machines in general.
There’s another issue worth pointing out: the classical analog of BQP isn’t P. The classical analog is BPP. It is widely believed that P = BPP, but if P ≠ BPP then the relevant question in your context will be whether BPP = BQP.
if you could build one, actually lets you solve more problems efficiently than you could with a classical computer; this is the question of whether P = BQP.
This isn’t quite what that question asks. We know that some things are faster on a quantum computer than anything a classical computer can do, at least in the black-box (query) model; Grover’s algorithm is one example. The question of whether P = BQP is one formulation of this for decision (yes/no) problems, but even that isn’t quite accurate: for example, quantum computers could give some extremely-high-degree polynomial speedup even while P = BQP. But this is essentially a minor nitpick.
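For what it’s worth, the Grover speedup is easy to see in a small classical simulation of the algorithm. This is only an illustration (the parameters are arbitrary, and simulating the state vector classically of course costs 2^n work): with N = 2^n items and one marked item, about (π/4)·√N Grover iterations concentrate essentially all of the probability on the marked item, versus the roughly N/2 queries a classical search needs on average in the black-box setting.

```python
# Classical statevector simulation of Grover search on n qubits with one marked
# item. Illustrative only: simulating the state costs 2**n work classically; the
# point is just that ~(pi/4)*sqrt(N) Grover iterations suffice.

import numpy as np

n = 10                      # number of qubits
N = 2**n                    # size of the search space
marked = 123                # index of the marked item (arbitrary choice)

state = np.full(N, 1 / np.sqrt(N))          # uniform superposition
iterations = int(round(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[marked] *= -1                     # oracle: flip the marked amplitude
    state = 2 * state.mean() - state        # diffusion: inversion about the mean

print(f"{iterations} iterations; "
      f"probability of measuring the marked item = {state[marked]**2:.4f}")
```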
But the point is that you can’t just say “you can’t build a device that solves the halting problem” because, for any fixed finite set of Turing machines, you can trivially build such a device, you just can’t recognize it once you’ve built it...
solving is not the same as providing the answer

Yes, but the distinction between the two isn’t captured by computability theory (unless you are allowed to pose infinitely many problems). Also, if the universe is spatially infinite, it can solve the halting problem in a deeply silly way, namely there could be an infinite string of bits somewhere, each a fixed distance from the next, that just hardcodes the solution to the halting problem.
This is obviously unsatisfying, but part of the reason it’s unsatisfying is that even if such an infinite string of bits existed (and even if we somehow verified that it was trustworthy), it would constitute a horribly inefficient algorithm for solving the halting problem: moving at constant speed, it would take time exponential in the length of a Turing machine’s code just to travel to the place in the universe where the bit determining whether that machine halts is stored. But this is a complexity consideration, not a computability consideration.
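To see where the exponential comes from, suppose the bits are laid out in the order of some standard enumeration of Turing machine descriptions, say all binary strings in length-then-lexicographic order (the “descriptions” below are a made-up stand-in for a real Turing machine encoding). A machine whose description is n bits long then sits at an index of roughly 2^n, so at constant speed you travel a distance exponential in n to reach its bit:

```python
# Index of a binary description in the length-then-lexicographic enumeration of
# all binary strings: "", "0", "1", "00", "01", ... . A description of length n
# lands at an index between 2**n - 1 and 2**(n+1) - 2, i.e. exponentially far out.
# The descriptions here are placeholders for a real Turing-machine encoding.

def lexlen_index(description: str) -> int:
    n = len(description)
    shorter = 2**n - 1                        # number of strings of length < n
    within = int(description, 2) if n else 0  # rank among the length-n strings
    return shorter + within

for desc in ["0", "1010", "10100111", "1" * 40]:
    print(f"length {len(desc):2d} -> index {lexlen_index(desc)}")
```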
unless you are allowed to pose infinitely many problems
Or one selected at random from an infinite class of problems.
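For problems whose answers can be checked efficiently, such a random spot check is easy to describe. Here is a minimal sketch using factoring as the stand-in problem; claimed_factor_device is a hypothetical placeholder for the exotic device, implemented with brute force here only so that the sketch runs.

```python
# Spot-checking a claimed super-device on problems drawn at random from an
# infinite family whose answers are easy to verify. Factoring is the stand-in
# problem; `claimed_factor_device` is a hypothetical placeholder, implemented
# by brute force only so that this sketch is runnable.

import random

def claimed_factor_device(n: int) -> int:
    """Stand-in for the device under test: return a nontrivial factor of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n was prime (should not happen for the composites we send it)

def random_semiprime(bits: int) -> int:
    """Sample a random composite n = p*q with small prime factors, for testing."""
    primes = [p for p in range(3, 2**bits)
              if all(p % d for d in range(2, int(p**0.5) + 1))]
    return random.choice(primes) * random.choice(primes)

def spot_check(trials: int = 20) -> bool:
    for _ in range(trials):
        n = random_semiprime(10)
        f = claimed_factor_device(n)
        if not (1 < f < n and n % f == 0):   # verification is cheap: one division
            return False
    return True

print(spot_check())  # True for this honest stand-in; a rock would fail quickly
```

Of course this only works when the answers are efficiently checkable; for harder claims, that is where the interactive-proof framework mentioned above comes in.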
Also, if the universe is spatially infinite, it can solve the halting problem in a deeply silly way, namely there could be an infinite string of bits somewhere, each a fixed distance from the next, that just hardcodes the solution to the halting problem.
That’s why both computability theory and complexity theory require algorithms to have finite-sized source code.