I should clarify: Assumptions (1)-(4) in my comment were only supposed to be sufficient conditions for the impossibility of hypercomputation, not necessary conditions. The basic idea is this. If hypercomputation were physically possible, there would be some universe governed by our laws that contains a hypercomputational process. If the laws are computable, deterministic, and require only finite-precision initial data, we could simulate that universe accurately on a computer, given its initial state. This would let us simulate a hypercomputational process on a Turing machine, which is impossible: by definition, a hypercomputational process performs a task no Turing machine can. Therefore hypercomputation is incompatible with those assumptions.
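Purely to illustrate the structure of the reductio, here is a minimal sketch. The names `step_laws`, `encode_initial_state`, and `read_verdict` are hypothetical stand-ins (nothing like them is claimed to exist); the only point is that, under the computability, determinism, and finite-precision assumptions, the loop below would be an ordinary Turing computation that decides whatever problem the simulated hypercomputer decides.

```python
# Illustrative sketch only. If the laws were computable, deterministic, and
# needed only finite-precision initial data, a procedure of this shape would
# be Turing-computable -- and would inherit the hypercomputer's power, which
# is the contradiction. The stubs are hypothetical stand-ins, not real physics.

def step_laws(state):
    """Hypothetical computable update rule: advance the simulated universe one step."""
    raise NotImplementedError("stand-in for the (assumed computable) laws")

def encode_initial_state(problem_instance):
    """Hypothetical finite-precision initial data for a universe containing a
    hypercomputer set up to answer the given problem instance."""
    raise NotImplementedError("stand-in for finite initial data")

def read_verdict(state):
    """Hypothetical readout: the hypercomputer's answer once it appears in the
    simulated state, else None."""
    raise NotImplementedError("stand-in for reading off the simulated output")

def simulated_hyperdecision(problem_instance):
    """If the three stubs existed, this loop would be an ordinary Turing
    computation reproducing the hypercomputer's verdict -- contradiction."""
    state = encode_initial_state(problem_instance)
    while True:
        verdict = read_verdict(state)
        if verdict is not None:
            return verdict          # the simulated hypercomputer's answer
        state = step_laws(state)    # each step computable, so the whole loop is
```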
The assumptions of determinism and finite precision are crucial for this argument because without them accurate simulation for an arbitrary length of time need not be possible. But there may be other arguments for the physical impossibility of hypercomputation that do not rely on these assumptions.
I do think assumptions (1) and (2) by themselves are insufficient to rule out hypercomputation, and the Malament-Hogarth setup is supposed to be an existence proof of this insufficiency. The laws of GR are computable, yet they seem to allow hypercomputation (perhaps the M-H setup is incompatible with some other fundamental law, as some of the posters are arguing, but that is irrelevant to this particular point).
ETA: It’s also worth keeping in mind that non-deterministic, in this context, is not the same as probabilistic. If a spacetime is not globally hyperbolic, then field values on any spacelike surface do not determine field values everywhere else. But this doesn’t mean that the laws give you probabilities about what will happen outside the domain of dependence. The general relativistic laws are non-probabilistic.