It occurs to me that we can’t really say, since we only have access to the time of the program, which may or may not reflect the actual computational resources expended.
Imagine you were living in a game, trying to judge the game’s hardware requirements. If you did that by looking at a clock in the game, you’d need to assume that the clock is synchronized to the actual system time. If you incremented a counter instead, you wouldn’t be able to tell, from inside the program, at which actual machine steps that counter++ instruction executes.
The problem is that we don’t have access to anything external. We aren’t watching the Turing Machine compute; we are inside the Turing Machine, “watching” the effects of other parts of the program, such as a folding protein (observing whenever it’s our turn to be simulated). We never see the Turing Machine compute, only its output. The raw computing power / requirements “behind the scenes”, even if such a behind-the-scenes is only a non-existent abstraction, is impossible to judge with certainty, similar to a map-territory divide. Since there is no access in principle, and we cannot observe anything but the “output”, we have no way of verifying any assumptions we may make about a correspondence between “game timer” and “system timer”, or of devising any experiments to test them.
Even the recently linked “test the computational limits” experiment doesn’t break the barrier, since for all we know the program may stall, and the next “frame” it outputs may still seem perfectly consistent, with no stalling, when viewed from inside the program, which is where we are. We wouldn’t subjectively notice the stall. If such an experiment did find something, it would be akin to a bug, not to a measurement of computational resources expended.
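The counter++ point can be made concrete with a toy sketch (hypothetical names, Python): a host loop advances a simulated world one tick at a time, burning an arbitrary amount of host-side time between ticks. The simulated observer’s only clock is the counter, which advances perfectly uniformly no matter how long the host dawdled.

```python
import random
import time

def run_simulation(ticks):
    """Host loop: advance the simulated world tick by tick, spending an
    arbitrary amount of host-side time on each tick. The 'world_counter'
    is the only clock visible from inside the simulation."""
    world_counter = 0   # the in-world clock: one increment per tick
    host_times = []     # per-tick host durations, visible only "outside"
    for _ in range(ticks):
        start = time.perf_counter()
        # Stand-in for wildly variable per-tick compute cost on the host.
        time.sleep(random.uniform(0.0, 0.01))
        world_counter += 1   # from inside: one perfectly regular tick
        host_times.append(time.perf_counter() - start)
    return world_counter, host_times

counter, host_times = run_simulation(20)
# Inside view: 20 uniform ticks. Outside view: uneven durations that the
# in-world counter cannot detect.
print(counter)  # 20
```

The point of the sketch is that nothing computable from `world_counter` alone distinguishes a fast host from a slow one.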
It occurs to me that we can’t really say, since we only have access to the time of the program, which may or may not reflect the actual computational resources expended.
That’s a valid point, but it does presuppose exotic new physics to make that substrate, in which “our” time passes arbitrarily slowly compared to the really real time, so that it can solve NP-hard problems between our clock ticks. We would, in effect, be in a simulation. Evidence of NP-hard problems actually being solved in P could be taken as evidence that we are in one.
we only have access to the time of the program [...] we are inside the Turing Machine “watching” the effects of other parts of the program, such as a folding protein
If we assume that protein folding occurs according to the laws of quantum mechanics, then it shouldn’t tell us anything about the computational complexity of our universe besides what quantum mechanics tells us, right?
Well, yeah, that’s what I’m leaning towards. The laws of physics themselves need not govern the machine (Turing or otherwise); they are effects we observe, with us being other such effects. The laws of physics and the observers are both part of the output.
It’s like playing an online roleplaying game and trying to infer what the program can actually do, or what resources it takes, when all you can access is “how high can my character jump” and other in-game rules. The rules governing the jumping, and any limits the program imposes on the jumping behavior, are not indicative of the resource requirements or efficiency of the underlying system. Is calculating the jump easy or hard for the computer? How would you know as a character? The output, again, is a bad judge. Take this example:
Imagine an old Intel 386 system that you somehow rigged to run the latest FPS. It may only output one frame every few hours, but as a sentient character inside that game you wouldn’t notice. Things would be “smooth” for you, because from your point of view the rules would be unchanged.
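A minimal sketch of the 386 scenario (assumed names, Python): simulated time advances by a fixed dt per frame regardless of what each frame costs the host, so the in-game timeline is perfectly smooth whether a frame takes milliseconds or hours of “real” time.

```python
import random

FIXED_DT = 1.0 / 60.0  # in-game seconds per frame, fixed by the game's rules

def render_frames(n_frames):
    """Each frame costs the host a wildly varying amount of 'real' time
    (here just simulated as a number, not an actual delay), but in-game
    time ticks forward by exactly FIXED_DT either way."""
    game_time = 0.0
    host_time = 0.0
    for _ in range(n_frames):
        # Hours per frame on the 386, milliseconds on modern hardware:
        host_cost = random.uniform(0.001, 3600.0)
        host_time += host_cost
        game_time += FIXED_DT  # the character's experienced time
    return game_time, host_time

game_time, host_time = render_frames(600)
# 600 frames is exactly 10 in-game seconds (up to float rounding),
# no matter what the host spent.
print(game_time)
```

From the character’s side, only `game_time` exists; `host_time` is the quantity the argument says is inaccessible in principle.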
We can only say that, given our knowledge of the laws of physics, the TM running the universe doesn’t output anything which looks like an efficient NP-problem solver. Whether the program contains one, or whether the correct hardware abstraction running it uses one, is anyone’s guess. (The “contains one” part probably isn’t anyone’s guess, because of Occam’s Razor considerations.)
If this is all confused (it may well be; it was mostly a stray thought), I’d appreciate a refutation.
If I understand correctly you’re saying that what is efficiently computable within a universe is not necessarily the same as what is efficiently computable on a computer simulating that universe. That is a good point.
Exactly. Thanks for succinctly expressing my point better than I could.
The question is whether assuming such a correspondence as the default case (as the “not necessarily” implies) is even a good default assumption.
Why would the rules inherent in what we see inside the universe be any more indicative of the rules of the computer simulating that universe than the rules inside a computer game are reflective of the instruction set of the CPU running it (they are not)?
I am aware that the reference class “computer running Super Mario Bros. / Kirby’s Dream Land” implies the rules are different, but on what basis would we choose any reference class which implies a correspondence?
Also, I’m not advocating simulationism with this per se, the “outer” computer can be strictly an abstraction.