Why do you believe infinite memory is incompatible with the fundamental laws of physics? In any case, if it is, then Turing machines are physically impossible as well, the only physically possible computational systems are finite state automata, and there is no halting problem. You are right that in this case the infinite run-time provided by M-H spacetimes will not give any computational advantage, since every finite state machine either halts or loops anyway. But in discussions of computability theory I always assume that we are idealizing away memory constraints (unless explicitly stated otherwise).
As for the infinite energy requirement, there are three things to say. First, a Turing machine with infinite tape does not need to perform any logically irreversible operations (such as erasure). Second, even if it does perform computationally irreversible steps, that does not make them thermodynamically irreversible unless the Second Law of Thermodynamics holds, and the Second Law is not a fundamental law of physics (see this comment). Third, even if infinite energy is required, I don’t think the assumption of an infinite energy source is incompatible with any of the fundamental laws of physics.
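(Not part of the original exchange, but a minimal sketch of what “computationally reversible” means here: the Toffoli (CCNOT) gate computes AND without erasing its inputs and is its own inverse, so, given spare ancilla bits, i.e. extra tape, Boolean logic can in principle be carried out without logically irreversible steps.)

```python
# Minimal sketch: the Toffoli (CCNOT) gate computes AND reversibly -- no input
# bit is erased, and the gate is its own inverse, so every step can be undone.

def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    """Map (a, b, c) -> (a, b, c XOR (a AND b)); a bijection on 3-bit states."""
    return a, b, c ^ (a & b)

# With the ancilla bit c initialized to 0, the third output is a AND b,
# but the inputs a and b are preserved rather than erased.
for a in (0, 1):
    for b in (0, 1):
        out = toffoli(a, b, 0)
        assert out == (a, b, a & b)
        # Applying the gate twice restores the original state: it is self-inverse.
        assert toffoli(*out) == (a, b, 0)

print("Toffoli computes AND without erasure and is reversible.")
```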
The durability concern also seems to depend on thermodynamic considerations, although perhaps there is some genuine inconsistency with fundamental law here once quantum mechanics enters the picture.
Well, an infinite memory store or an infinite energy source would have infinite mass. So it would either take up the entire universe and have nowhere external to send its results to, or, if it had finite size, it would be inside its own Schwarzschild radius, and there would be no way to send a signal out through its event horizon.
So yeah, I’d call infinite storage or power sources (as politely as possible) “unphysical”.
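(A back-of-the-envelope sketch of the horizon argument above, my illustration rather than the commenter’s: the Schwarzschild radius r_s = 2GM/c² grows linearly with mass, so for any fixed outer radius a large enough mass puts the object inside its own horizon, and an infinite mass does so for every finite radius. The masses and radius below are arbitrary example values.)

```python
# Back-of-the-envelope sketch: r_s = 2*G*M/c**2 grows linearly with mass, so a
# fixed-size object of sufficiently large mass ends up inside its own horizon.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Schwarzschild radius in metres for a given mass in kilograms."""
    return 2 * G * mass_kg / c**2

def inside_own_horizon(mass_kg: float, radius_m: float) -> bool:
    """True if an object of this mass and physical radius lies within r_s."""
    return radius_m < schwarzschild_radius(mass_kg)

# Example: a hypothetical memory bank packed into a one-light-year radius.
one_light_year = 9.461e15  # metres
for mass in (1e30, 1e40, 1e50, 1e60):
    print(mass, schwarzschild_radius(mass), inside_own_horizon(mass, one_light_year))
```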
And I don’t see why you think the halting problem goes away just because you can’t put infinite tape in your Turing machine, or because you use finite state automata instead. You still can’t set an upper bound on the size of computation needed to determine whether any algorithm in general will terminate, and I kind of thought the point of the halting problem was that it doesn’t only apply to actual Turing machines.
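(For reference, the standard diagonalization argument behind the halting problem, sketched in Python; the halts() oracle below is hypothetical, and the whole point of the argument is that no correct implementation of it can exist.)

```python
# Standard diagonalization sketch. halts() is a hypothetical oracle; the
# argument shows that no correct implementation of it can exist.

def halts(program, argument) -> bool:
    """Hypothetical halting oracle: True iff program(argument) would halt."""
    raise NotImplementedError("no correct implementation can exist")

def diagonal(program):
    # Do the opposite of whatever halts() predicts program does on itself.
    if halts(program, program):
        while True:        # halts() says it halts, so loop forever
            pass
    return "halted"        # halts() says it loops, so halt immediately

# Feeding diagonal to itself yields a contradiction: if halts(diagonal, diagonal)
# returned True, diagonal(diagonal) would loop forever; if it returned False,
# diagonal(diagonal) would halt. Either way the oracle gives the wrong answer.
```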
So it would either take up the entire universe and have nowhere external to send its results to, or, if it had finite size, it would be inside its own Schwarzschild radius, and there would be no way to send a signal out through its event horizon.
These are not the only options. Infinite sets have infinite proper subsets, so an object in a spatially infinite universe could have infinite size without taking up the entire universe. In a universe with an infinite amount of matter, a computational process could requisition an infinite proper subset of that matter as memory while still leaving plenty of matter to build other stuff. Or (if you don’t take the cosmic censorship hypothesis as a constraint on physical possibility) you could have a naked singularity functioning as a white hole (the time reverse of a black hole, allowed by the time-reversal-invariant Einstein Field Equations) disgorging matter as needed for the computation.
That said, I am concerned that making the computational device too large would significantly modify the background metric, so (as shminux pointed out) one can’t glibly consider an M-H spacetime, put a massive (perhaps infinitely large) Turing machine in it, and still assume that it is the same spacetime. It’s not obvious to me that it would be impossible to have a device of this sort in an M-H spacetime, but neither is it obvious that it would be possible (and FWIW, I would bet against the possibility).
I think the right response is that infinite memory is always an idealization in discussions of computability. When we talk about the Church-Turing thesis as limning the notion of “computable”, we are ignoring spatial constraints. Computability is a pure mathematical concept, not an engineering concept. When a theorist says that X is computable, she is not committing herself to the claim that the universe contains physical resources sufficient for the construction of a computer that implements X. Why should this usual standard become more rigid when we consider M-H spacetimes?
And I don’t see why you think the halting problem goes away just because you can’t put infinite tape in your Turing machine, or because you use finite state automata instead. You still can’t set an upper bound on the size of computation needed to determine whether any algorithm in general will terminate, and I kind of thought the point of the halting problem was that it doesn’t only apply to actual Turing machines.
Every finite state machine will either halt or start repeating itself in finite time. This is guaranteed. To figure out whether a particular machine halts, simply wait until it either halts or enters a state it has entered before. One of these will happen within finite time, so you will always be able to determine within finite time whether or not the machine halts. It’s true that if you want a single halting oracle that works for finite state machines of arbitrary size, it cannot itself be a finite state machine; it would have to be a Turing machine. Is this what you mean by the halting problem for FSAs? If so, then I agree, but that is a different sort of problem from the halting problem for Turing machines. My point was just that in the case of FSAs (unlike Turing machines) there’s no computation that is ruled out solely due to the lack of infinite runtime; allowing infinite runtime doesn’t increase the power of FSAs.
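(A minimal sketch of the procedure just described, using a made-up toy encoding of a deterministic finite state machine as a plain transition table; none of these names come from the discussion. Run the machine, remember which states it has visited, and stop as soon as it either halts or revisits a state.)

```python
# Minimal sketch: simulate a deterministic finite state machine and stop once
# it halts or revisits a state (at which point it must repeat forever).

def decides_halting(transitions: dict, start, halting_states: set) -> bool:
    """Return True if the machine halts, False if it loops forever.

    Because the machine is deterministic and has finitely many states, one of
    the two outcomes occurs within a number of steps bounded by the state count.
    """
    seen = set()
    state = start
    while True:
        if state in halting_states:
            return True      # reached a halting state
        if state in seen:
            return False     # revisited a state: it will repeat forever
        seen.add(state)
        state = transitions[state]

# Toy example machines (hypothetical):
halting_machine = {"A": "B", "B": "C", "C": "HALT"}
looping_machine = {"A": "B", "B": "C", "C": "A"}
print(decides_halting(halting_machine, "A", {"HALT"}))  # True
print(decides_halting(looping_machine, "A", {"HALT"}))  # False
```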