There are spacetime models in general relativity (i.e. solutions to the Einstein equations) that permit hypercomputation. In a Malament-Hogarth spacetime, there are worldlines such that an object traveling along the worldline will experience infinite proper time, but for an observer at some point p outside the worldline, the time it takes for the object to traverse the worldline is finite, and the worldline is entirely within p’s causal past. So if you had a Turing machine traveling along this worldline, it could send a signal to p if and only if it halted, and the observer at p is guaranteed to receive the signal if the machine ever halts. No infinite-precision measurements are involved (unless perhaps you believe that a Turing machine operating reliably for an indefinite period of time is tantamount to an infinite-precision measurement).
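A toy rendering of the protocol in code, purely as an illustration (the names are made up for this sketch, and the M-H geometry is simply assumed rather than modelled):

```python
# Toy of the protocol above.  'run_on_mh_worldline' stands in for the
# physically exotic part (unbounded proper time for the traveller, finite
# wait for the observer at p); ordinary code cannot reproduce that, so the
# "never halts" branch is only reachable in the idealization.

def collatz_program(n: int) -> None:
    """Sample program whose halting we would like to decide."""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2

def run_on_mh_worldline(program, argument) -> bool:
    """Idealization: the traveller gets unbounded proper time and beams a
    signal toward the observer event p if and only if the program halts."""
    program(argument)   # may run "forever" in the traveller's proper time
    return True         # reached only if the program halted, i.e. signal sent

def observer_at_p(signal_received: bool) -> str:
    """After a finite wait, the observer at p reads off the answer."""
    return "halts" if signal_received else "never halts"

print(observer_at_p(run_on_mh_worldline(collatz_program, 27)))  # -> halts
```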
Are these spacetimes physically possible? Well, like I said, they satisfy the basic laws of GR. However, they are not globally hyperbolic, which means there is no space-like surface (analogous to an “instant of time”) such that providing all data on that surface fully determines the data over the rest of spacetime uniquely. In other words, determinism of a particularly strong variety fails.
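For reference, the standard definitions behind this informal gloss (nothing beyond textbook GR): the domain of dependence of a space-like surface \(\Sigma\) is
\[
D(\Sigma) = \{\, p : \text{every inextendible causal curve through } p \text{ meets } \Sigma \,\},
\]
\(\Sigma\) is a Cauchy surface if \(D(\Sigma)\) is the entire spacetime, and a spacetime is globally hyperbolic iff it admits a Cauchy surface. A Malament-Hogarth spacetime admits no Cauchy surface, which is the precise sense in which this strong form of determinism fails.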
The strong version of the cosmic censorship hypothesis essentially states that a physically reasonable spacetime must be globally hyperbolic, so if you take it as a criterion of physical possibility, then Malament-Hogarth spacetimes are not physically possible. I guess this just brings up a certain amount of vagueness in the phrase “physically possible”. It is usually taken to mean “possible according to the physical laws”, but what exactly delimits what counts as a physical law? Suppose our universe is globally hyperbolic. Would it then be a law that space-time is globally hyperbolic? Anyway, if you have a more restrictive notion of physical law, such that only laws of temporal evolution count, then the laws of general relativity at least appear to permit hypercomputation.
At the end of your post you suggest that the computability of physical law might rule out hypercomputation. But the M-H spacetime argument does not require that the laws are uncomputable. There is a simple argument from the computability of the physical laws to the impossibility of hypercomputation if you assume full determinism, but that is an additional assumption. You can easily prove that hypercomputation is physically impossible if you make the following four assumptions (presented in reverse order of plausibility, according to my own metric):
(1) The laws of physics are computable.
(2) The laws of physics are complete (i.e. there are no phenomena that are not covered by the laws).
(3) Spacetime must be globally hyperbolic (i.e. there must be a space-like surface of the kind described above).
(4) Finite-precision data over any space-like surface is sufficient for accurately determining the data everywhere on that surface’s domain of dependence.
In a Malament-Hogarth spacetime, there are worldlines such that an object traveling along the worldline will experience infinite proper time, but for an observer at some point p outside the worldline, the time it takes for the object to traverse the worldline is finite, and the worldline is entirely within p’s causal past.
Is it an actual spacetime, or just the class of all spacetimes with this property? I’m having trouble locating a paper describing the metric that is not just the inside of the Kerr metric.
Are these spacetimes physically possible? Well, like I said, they satisfy the basic laws of GR.
As a self-appointed resident GR expert, I would like to caution against this misapplication of GR. You cannot simply “place a Turing machine into a spacetime”. The Turing machine is not a test particle. It has mass, it computes stuff and hence radiates heat, it increases the entropy of the universe. As a result, the spacetime with the Turing machine in it is different from the spacetime without. The change is tiny and can be neglected in many circumstances, but most emphatically not in this case.
The problem with placing a material object at non-zero temperature into a spacetime which maps an infinite affine-parameter geodesic into a finite one is that you get an infinite amount of radiation and entropy in a finite time. As a result, your spacetime blows up into bits. This is a modification of Hawking’s argument in favor of his famous Chronology Protection Conjecture (he used CTCs and virtual particles, not real objects).
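A back-of-the-envelope way to see the divergence (my own notation, not shminux’s calculation): if the object radiates at some roughly constant rate \(P\) per unit of its own proper time \(\tau\), and its entire worldline of infinite proper time lies in the causal past of the observer event \(p\), then the energy arriving near \(p\) is of order
\[
E \sim \int_0^{\infty} P \, f(\tau)\, d\tau ,
\]
where \(f(\tau)\) is the frequency-shift factor with which radiation emitted at proper time \(\tau\) is received. With \(P\) bounded below (any warm object), this diverges unless \(f(\tau) \to 0\) fast enough, i.e. unless the late signals are infinitely redshifted, which is exactly the tension with actually detecting the “I halted” signal that comes up later in the thread.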
This argument is quite general, but not often appreciated and not formalized, except in a few cases. It also applies to any attempt to use a CTC as an actual trajectory of a warm material object: you cannot hope to match up every microstate exactly after a complete loop simply by evolution. Hence Novikov’s disclaimer about his self-consistency principle: it’s vacuous unless you presume “new physics” beyond GR.
Is it an actual spacetime, or just the class of all spacetimes with this property? I’m having trouble locating a paper describing the metric that is not just the inside of the Kerr metric.
It’s the class of all spacetimes with the property. Examples besides the Kerr spacetime are the universal covering of anti-de Sitter spacetime, the Reissner-Nordström spacetime, even a simple Minkowski spacetime rolled up along the temporal axis (or in fact any spacetime with CTCs).
As a self-appointed resident GR expert, I would like to caution against this misapplication of GR. You cannot simply “place a Turing machine into a spacetime”. The Turing machine is not a test particle. It has mass, it computes stuff and hence radiates heat, it increases the entropy of the universe. As a result, the spacetime with the Turing machine in it is different from the spacetime without. The change is tiny and can be neglected in many circumstances, but most emphatically not in this case.
Fair point. The spacetime structure will indeed indefinitely amplify even the tiniest bit of thermal radiation. And it is also true that Landauer’s principle tells us that a computational process must radiate heat.
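For concreteness, the bound in question: erasing one bit of information at temperature \(T\) dissipates at least
\[
E_{\min} = k_B T \ln 2
\]
of heat, so a computation that erases information at any nonzero rate keeps radiating, and the amplification worry applies even to this minimum.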
But Landauer’s principle is a consequence of the Second Law of Thermodynamics, and the Second Law is not, to the best of our knowledge, a fundamental law. It holds in our universe because of special boundary conditions, but it is entirely possible to construct universes with the same fundamental laws and different boundary conditions so that entropy stops increasing at some point in time and begins decreasing, or where entropy does not exhibit any significant monotonic tendency at all.
Does rigging boundary conditions in this manner take us outside the realm of physical possibility? Again, that depends on what the OP means by “physically possible”. If all he means is “consistent with the fundamental laws of temporal evolution” then no, choosing special boundary conditions which negate the Second Law does not violate physical possibility. Of course, one would need very specifically (and implausibly) rigged boundary conditions in order to get a universe with an M-H setup that does not blow up, but astronomical unlikelihood is not the same as impossibility.
ETA: If you’re interested, here’s a nice paper showing that a Malament-Hogarth spacetime can be constructed that satisfies various criteria of physical reasonableness (energy conditions, stable causality, etc.).
It’s the class of all spacetimes with the property. Examples besides the Kerr spacetime are the universal covering of anti-de Sitter spacetime, the Reissner-Nordström spacetime, even a simple Minkowski spacetime rolled up along the temporal axis (or in fact any spacetime with CTCs).
Thanks for the examples, that’s what I suspected, though I find the CTC examples dubious at best, as you appeal to a much stronger impossibility to justify a weaker one. I am not a stickler for global hyperbolicity; I can certainly imagine topological and/or geometric instantons “magically” appearing and disappearing. These don’t cause infinite backreaction the way CTCs do.
If you’re interested, here’s a nice paper showing that a Malament-Hogarth spacetime can be constructed that satisfies various criteria of physical reasonableness (energy conditions, stable causality, etc.).
It does indeed attempt to address most of the issues, but not the divergent-emissions one, which seems mutually exclusive with non-divergent redshift. I am even fine with the “requires infinite energy” issue, since I can certainly imagine pumping energy through a white hole from some other inaccessible spacetime (or some other instanton-like event).
Does rigging boundary conditions in this manner take us outside the realm of physical possibility?
My interest is whether some hypercomputational construct can be embedded into our universe (which is roughly of the expanding FRW-dS type), not whether some other universe where entropy can decrease can perform these tricks. The reason, again, is that if you use much stronger assumptions to justify something weaker, the argument becomes much less interesting. In an extreme case “because DM decided so” would trivially support anything you want.
But Landauer’s principle is a consequence of the Second Law of Thermodynamics, and the Second Law is not, to the best of our knowledge, a fundamental law. It holds in our universe because of special boundary conditions, but it is entirely possible to construct universes with the same fundamental laws and different boundary conditions so that entropy stops increasing at some point in time and begins decreasing, or where entropy does not exhibit any significant monotonic tendency at all.
What about the Drescher/Barbour argument that the Second Law is an artifact of observers’ ability to record time histories? That is, the only states that will contain “memories” (however implemented) of past states are the ones where entropy is higher than in the “remembering” state, because all processes of recording increase entropy.
So even in those thought experiments where you “reverse time” of the chaotic billiards-ball-world back to a low-entropy t = 0 and keep going so that entropy increases in the negative time direction, the observers in that “negative time” state will still regard t = 0 to be in their past. Furthermore, any scenario you could set up where someone is only entangled with stuff that you deliberately decrease the entropy of (by increasing entropy outside the “bubble”) will result in that person thinking that the flow of time was the opposite of what you think.
I don’t know how well this argument meshes with the possibility of such GR solutions.
Truly infinite runtime has some problems, though they’re different ones than infinite precision. What would you make your computer out of, and how would you ensure it didn’t simply decay into a sphere of iron by quantum tunneling?
I’m not sure whether that or voltage precision is more of a “mere engineering detail”; they both seem problematic at a level that, say, atom-perfect construction does not.
So if you had a Turing machine traveling along this worldline, it could send a signal to p if and only if it halted, and the observer at p is guaranteed to receive the signal if the machine ever halts. No infinite-precision measurements are involved (unless perhaps you believe that a Turing machine operating reliably for an indefinite period of time is tantamount to an infinite-precision measurement).
Even if such spacetimes were possible, in order to exploit them for hypercomputation you would require a true Turing machine with infinite tape, an infinite energy supply (unless it was perfectly reversible) and enough durability to run for a literally infinite amount of proper time without breaking. Such requirements seem inconsistent with the known laws of physics.
Why do you believe infinite memory is incompatible with the fundamental laws of physics? In any case, if it is, then Turing machines are physically impossible as well, the only physically possible computational systems are finite state automata, and there is no halting problem. You are right that in this case the infinite run-time provided by M-H spacetimes will not give any computational advantage, since every finite state machine either halts or loops anyway. But in discussions of computability theory I always assume that we are idealizing away memory constraints (unless explicitly stated otherwise).
As for the infinite energy requirement, there are three things to say. First, a Turing machine with infinite tape does not need to perform any irreversible computations (such as erasure). Second, even if it does perform computationally irreversible steps, this does not mean they are thermodynamically irreversible unless the Second Law of Thermodynamics holds, and this is not a fundamental law of physics (see this comment). Third, even if infinite energy is required, I don’t think the assumption of an infinite energy source is incompatible with any of the fundamental laws of physics.
The durability concern also seems to depend on thermodynamic considerations, although perhaps there is some genuine inconsistency with fundamental law here once quantum mechanics enters the picture.
Well, an infinite memory store or an infinite energy source would have infinite mass. So it would either take up the entire universe and have nowhere external to send its results to or, if it had finite size, it would be inside its own Schwarzschild radius, and there would be no way to send a signal out through its event horizon.
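The quantitative version of that point, for what it’s worth: the Schwarzschild radius
\[
r_s = \frac{2GM}{c^2}
\]
grows linearly with mass, so for any fixed outer size \(R\), a large enough mass gives \(r_s > R\) and the device sits inside its own horizon. (This is only a heuristic for a literally infinite mass, since an infinite, spatially extended distribution is not described by the Schwarzschild solution at all, which is roughly the loophole pointed out in the reply below.)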
So yeah, I’d call infinite storage or power sources (as politely as possible) “unphysical”.
And I don’t see why you think the halting problem goes away just because you can’t put infinite tape in your Turing machine, or because you use finite state automata instead. You still can’t set an upper bound on the size of computation needed to determine whether any algorithm in general will terminate, and I kind of thought the point of the halting problem was that it doesn’t only apply to actual Turing machines.
So it would either take up the entire universe and have nowhere external to send its results to or, if it had finite size, it would be inside its own Schwarzschild radius, and there would be no way to send a signal out through its event horizon.
These are not the only options. Infinite sets have infinite proper subsets, so an object in a spatially infinite universe could have infinite size without taking up the entire universe. In a universe with an infinite amount of matter, a computational process could requisition an infinite proper subset of that matter as memory while still leaving plenty of matter to build other stuff. Or (if you don’t take the cosmic censorship hypothesis as a constraint on physical possibility) you could have a naked singularity functioning as a white hole (the time reverse of a black hole, allowed by the time reversal invariant Einstein Field Equations) disgorging matter as needed for the computation.
That said, I am concerned about the fact that making the computational device too large would significantly modify the background metric, so (as shminux pointed out) one can’t glibly consider an M-H spacetime, put a massive (perhaps infinitely large) Turing machine in it, and still assume that it is the same spacetime. It’s not obvious to me that it would be impossible to have a device of this sort in an M-H spacetime, but neither is it obvious that it would be possible (and FWIW, I would bet against the possibility).
I think the right response is that infinite memory is always an idealization in discussions of computability. When we talk about the Church-Turing thesis as limning the notion of “computable”, we are ignoring spatial constraints. Computability is a pure mathematical concept, not an engineering concept. When a theorist says that X is computable, she is not committing herself to the claim that the universe contains physical resources sufficient for the construction of a computer that implements X. Why should this usual standard become more rigid when we consider M-H spacetimes?
And I don’t see why you think the halting problem goes away just because you can’t put infinite tape in your Turing machine, or because you use finite state automata instead. You still can’t set an upper bound on the size of computation needed to determine whether any algorithm in general will terminate, and I kind of thought the point of the halting problem was that it doesn’t only apply to actual Turing machines.
Every finite state machine will either halt or start repeating itself in finite time. This is guaranteed. To figure out whether a particular machine halts, simply wait until it either halts or enters a state it has entered before. One of these will happen within finite time, so you will always be able to determine within finite time whether or not the machine halts. It’s true that if you want a single halting oracle that works for finite state machines of arbitrary size, it cannot itself be a finite state machine; it would have to be a Turing machine. Is this what you mean by the halting problem for FSAs? If so, then I agree, but that is a different sort of problem from the halting problem for Turing machines. My point was just that in the case of FSAs (unlike Turing machines) there’s no computation that is ruled out solely due to the lack of infinite runtime; allowing infinite runtime doesn’t increase the power of FSAs.
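A minimal sketch of the procedure just described (illustrative only; the machine here is just a deterministic transition function on a finite state set):

```python
# Decide halting for a deterministic finite state machine by running it
# until it either halts or revisits a state.  Since the state set is
# finite, one of the two must happen within |states| + 1 steps
# (pigeonhole), so this always terminates.

def fsa_halts(transition, start, halting_states):
    """transition: dict mapping each state to its successor state
       start: initial state
       halting_states: states in which the machine halts"""
    seen = set()
    state = start
    while True:
        if state in halting_states:
            return True           # the machine halts
        if state in seen:
            return False          # a state repeated, so it loops forever
        seen.add(state)
        state = transition[state]

# Example: 0 -> 1 -> 2 -> 1 -> ... never reaches the halting state 3.
print(fsa_halts({0: 1, 1: 2, 2: 1, 3: 3}, start=0, halting_states={3}))  # False
```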
There are spacetime models in general relativity (i.e. solutions to the Einstein equations) that permit hypercomputation. In a Malament-Hogarth spacetime, there are worldlines such that an object traveling along the worldline will experience infinite proper time, but for an observer at some point p outside the worldline, the time it takes for the object to traverse the worldline is finite, and the worldline is entirely within p’s causal past. So if you had a Turing machine traveling along this worldline, it could send a signal to p if and only if it halted, and the observer at p is guaranteed to receive the signal if the machine ever halts.
Don’t you have the problem of the signal getting blue-shifted out of the range of your detector?
Infinite time is not enough to do hypercomputation. You also need infinite memory. If you only have n bits of memory, you only have 2^n possible states, so after time 2^n, your computation must terminate or have entered a loop, so more time is not useful.
Also, your and Wikipedia’s description is pretty vague. Deutsch proposed a more precise model of computation along a CTC. It assumes that you only have finitely many bits of memory (and are thus computable), but it avoids the problems that shminux mentions. John Watrous and Scott Aaronson proved that in this model, you can take full advantage of the time and solve exactly the problems in PSPACE. While such problems are not supposed to be tractable in short time without a CTC, that’s a far cry from halting. Also, reversible computing should make PSPACE problems practical in some sense.
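For reference (my summary, so take the details with a grain of salt): in Deutsch’s model the state \(\rho\) of the system sent around the CTC must be a fixed point of the evolution induced by its interaction \(U\) with the chronology-respecting register \(CR\),
\[
\rho = \operatorname{Tr}_{CR}\!\left[ U \,(\rho_{CR} \otimes \rho)\, U^{\dagger} \right],
\]
and such a fixed point always exists, which is what makes the model well defined. The Aaronson-Watrous result is that polynomial-size computers with access to this resource decide exactly PSPACE, classically or quantumly.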
I’m not sure I understand why determinism would be a necessary assumption in ruling out hypercomputation. As I understand it, Turing showed that probabilistic machines aren’t any more powerful, computationally, than deterministic ones. But you word (4) to include finite precision data being determinative. How far could that assumption be weakened? Could it, perhaps, be replaced by an assumption that you can’t exploit infinite precision in building your device—maybe there’s some chaotic behavior that stops you from predicting the behavior of a system from your data, but it’s no more useful than true randomness for building any computing device?
I should clarify: Assumptions (1)-(4) in my comment were only supposed to be sufficient conditions for the impossibility of hypercomputation, not necessary conditions. The basic idea is this. If hypercomputation were physically possible, there would be some universe governed by our laws that contains a hypercomputational process. If the laws are computable, deterministic and only require finite precision input data, we could simulate this universe accurately on a computer if we provided it with the initial state. This would allow us to simulate a hypercomputational process on a Turing machine, and that is impossible. Therefore hypercomputation is incompatible with those assumptions.
The assumptions of determinism and finite precision are crucial for this argument because without them accurate simulation for an arbitrary length of time need not be possible. But there may be other arguments for the physical impossibility of hypercomputation that do not rely on these assumptions.
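A runnable toy of the simulation step in this argument, with a cellular automaton standing in for “computable, deterministic laws acting on finite-precision data” (obviously not a model of GR; the names and the rule are just illustrative):

```python
# Toy version of the reductio: if the laws are a computable, deterministic
# rule acting on finite-precision data (here, rule 110 on a finite bit
# string), then the state at any finite time is computable from the initial
# data by an ordinary program.  Anything happening inside such a universe,
# including a would-be hypercomputer's output, could therefore be read off
# by a Turing machine, which is the contradiction.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """One deterministic, computable time step (periodic boundary)."""
    n = len(cells)
    return [RULE_110[(cells[i - 1], cells[i], cells[(i + 1) % n])] for i in range(n)]

def state_at(initial, t):
    """Computable laws + finite-precision initial data => computable history."""
    s = list(initial)
    for _ in range(t):
        s = step(s)
    return s

print(state_at([0] * 10 + [1] + [0] * 10, 5))
```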
I do think assumptions (1) and (2) by themselves are insufficient, and the Malament-Hogarth setup is supposed to be an existence proof of this insufficiency. The laws of GR are computable, yet they seem to allow hypercomputation (perhaps the M-H setup is incompatible with some other fundamental law, as some of the posters are arguing, but that is irrelevant to this particular point).
ETA: It’s also worth keeping in mind that non-deterministic, in this context, is not the same as probabilistic. If a spacetime is not globally hyperbolic, then field values on any spatial surface do not determine field values everywhere else. But this doesn’t mean that the laws give you probabilities about what will happen outside the domain of dependence. The general relativistic laws are non-probabilistic.
Don’t you have the problem of the signal getting blue-shifted out of the range of your detector?
Not necessarily. For an example of an M-H setup without divergent blueshift see the paper I linked at the end of this comment.