Nick, that proof works fine for any of the neorealist models, among which Everett’s model is variously placed. The problem is in interpretation. Remember that there is great disagreement among the Copenhagen models about where, exactly, wavefunction collapse happens; after all, if one treats the quantum measurement device itself as being in a quantum state, then 100% correlation may be acceptable. (Because the wavefunction of the computer wasn’t collapsed until the first and third measurements were examined together.)
The real problem here is that the Copenhagen models are effectively unscientific, since it is fundamentally impossible to disprove the concept that anything that is unmeasured is in an uncertain/undefined state. It’s an intellectual parlour trick, and shouldn’t be taken seriously.
At the same time, though, not calculating a value until something actually needs it is exactly the kind of efficiency hack one would want to implement if one were going to simulate an entire universe (roughly the lazy-evaluation idea sketched below)...
So if we are in some level of sub-reality, that would make it much more likely that the model is correct, even if there’s no way for us to actually test it...
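To make that analogy concrete, here is a minimal sketch of the “don’t compute it until something looks at it” idea, i.e. a lazily evaluated, memoized value in Python. Everything here (the `Lazy` class, `expensive_state`, the names) is a hypothetical illustration, not anything proposed in the thread.

```python
# Toy sketch of lazy evaluation: the value is not computed until it is
# first observed, and later observations reuse the cached result.

class Lazy:
    """Wraps a deferred computation and runs it only on first access."""

    def __init__(self, compute):
        self._compute = compute   # thunk: the computation to defer
        self._value = None
        self._evaluated = False

    def get(self):
        if not self._evaluated:          # first "measurement": do the work now
            self._value = self._compute()
            self._evaluated = True
        return self._value               # afterwards, return the cached answer


def expensive_state():
    print("computing state...")
    return 42


cell = Lazy(expensive_state)   # nothing has been computed yet
print(cell.get())              # prints "computing state..." then 42
print(cell.get())              # cached: just prints 42
```

The point is only the shape of the optimization: the work is skipped entirely unless and until an observer asks for the result, and from then on the stored answer is reused.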
So from a practical point of view, it comes down entirely to which model lets us most effectively predict things, since that’s what we actually care about. I’ll take a collection of “parlour tricks” that can tell me things about the future with high confidence over a provably self-consistent system that is wrong more often.
Upvoted because, while I don’t know the details of the Copenhagen models, if it is true that they rely on “the concept that anything that is unmeasured is in an uncertain/undefined state”, then until some method of testing this state is devised, the theories are effectively pseudo-science.
The Popper essay, originally mentioned above, describes the problem nicely.
It doesn’t speak to the truth or untruth of the theory, just to its scientific status, or lack thereof. In a nutshell, if it’s not testable, it’s not scientific, whether it is true or not. This is why it should not be taken too seriously, at least not until it becomes testable.