we only have access to the time of the program [...] we are inside the Turing Machine “watching” the effects of other parts of the program, such as a folding protein
If we assume that protein folding occurs according to the laws of quantum mechanics, then it shouldn’t tell us anything about the computational complexity of our universe besides what quantum mechanics tells us, right?
Well, yeah, that’s what I’m leaning towards. The laws of physics themselves need not govern the machine (Turing or otherwise); they are effects we observe, and we observers are other such effects. The laws of physics and the observers are both part of the output.
It’s like playing an online roleplaying game and trying to infer what the program can actually do, or what resources it needs, when all you can access is “how high can my character jump” and other in-game rules. The rules governing jumping, and whatever limits the program imposes on jumping behavior, are not indicative of the resource requirements or efficiency of the underlying system. Is calculating the jump easy or hard for the computer? How would you know, as a character? The output, again, is a bad judge. Take this example:
Imagine an old Intel 386 system which you rigged into running the latest first-person shooter. It might only output one frame every few hours, but a sentient character inside that game wouldn’t notice. Things would be “smooth” for you, because the in-game rules would be unchanged from your point of view.
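The 386 example can be sketched concretely. Below is a minimal toy simulation (the `step` and `run` names are mine, purely illustrative): the in-world state evolves only as a function of the tick count, so stretching the wall-clock time between frames changes nothing any in-world observer could measure.

```python
import time

def step(state):
    # One tick of in-world "physics": a character's jump under
    # constant in-world gravity of 1 unit per tick.
    pos, vel = state
    return (pos + vel, vel - 1)

def run(ticks, wall_clock_delay=0.0):
    """Advance the world `ticks` steps, sleeping `wall_clock_delay`
    seconds of outside (wall-clock) time between frames."""
    state = (0, 5)  # initial height and upward velocity
    history = []
    for _ in range(ticks):
        state = step(state)
        history.append(state)
        time.sleep(wall_clock_delay)  # slow hardware, fast hardware: no in-world difference
    return history

# A fast host and a much slower host produce identical in-world
# histories; nothing inside the simulation can detect the slowdown.
fast = run(10, wall_clock_delay=0.0)
slow = run(10, wall_clock_delay=0.001)
assert fast == slow
```

The point of the sketch: in-world time is the tick index, not wall-clock time, so the cost of computing each frame is invisible from inside.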
We can only say that, given our knowledge of the laws of physics, the TM running the universe doesn’t output anything that looks like an efficient NP-problem solver. Whether the program contains one, or the hardware abstraction running it uses one, is anyone’s guess. (Though “contains one” is probably not anyone’s guess, since Occam’s Razor weighs against it.)
If this is all confused (it may well be, was mostly a stray thought), I’d appreciate a refutation.
If I understand correctly you’re saying that what is efficiently computable within a universe is not necessarily the same as what is efficiently computable on a computer simulating that universe. That is a good point.
Exactly. Thanks for succinctly expressing my point better than I could.
The question is whether the correspondence treated as the default case (as implied by that “not necessarily”) is even a good default assumption.
Why would the rules we observe inside the universe be any more indicative of the rules of the computer simulating that universe than the rules inside a computer game are reflective of the instruction set of the CPU running it (which they are not)?
I am aware that the reference class “computer running Super Mario Bros. / Kirby’s Dream Land” implies that the rules differ, but on what basis would we choose any reference class which implies a correspondence?
Also, I’m not advocating simulationism with this per se; the “outer” computer can be strictly an abstraction.