Could you unpack that a little more? It sounds like you’re saying that ‘some people’ are unfairly discounting the possibility that QM is incomplete and locality is violated, for reasons that are not logically required. Is that accurate?
If so, I would like to point out that computational cheapness is not a good prior. It is vastly cheaper, computationally, to believe that our solar system is the only one and that the other dots are simulated, coarse-grained, on a thin shell surrounding it. This simplifies the universe to a mind-boggling degree. Indeed, we should not stop there. Better still to get rid of the interior of the sun, the interior of the earth, the interior of every rock, trees falling in the forest, people we don’t know… people we do know… and replace our interactions with them with simulacra that make things up and provide just enough to maintain a thin veneer of plausibility.
The rule set to implement such a world is HUGE, but the savings in data and computational complexity are more than enough to make up for it.
Don’t you think?
See also: Boltzmann brains.
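The trade-off being claimed here is the one from minimum description length: total cost = size of the rule set + size of the data left unexplained by the rules. A toy sketch of that accounting (the numbers are my own hypothetical illustration, not anything argued in the thread):

```python
# Toy MDL-style comparison (hypothetical numbers, for illustration only):
# total description cost = cost of the rule set + cost of the residual data.
full_universe = {"rules": 10, "data": 10**6}      # simple laws, vast detail
thin_shell = {"rules": 10**4, "data": 10**3}      # huge rule set, little data

def cost(model):
    """Total description length of a model: rules plus residual data."""
    return model["rules"] + model["data"]

# The shell's rule set is far bigger, yet its total cost is far smaller.
print(cost(full_universe), cost(thin_shell))
```

Whether the thin-shell world actually wins this comparison is of course the very point in dispute; the sketch only shows the shape of the argument, not its truth.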
However, you’ve no evidence that you’re not a Boltzmann brain. You choose to accept on faith that you are not and, desiring to be consistent and even-handed, you further choose to accept on faith that the entire visible universe is just as complex as it seems to be (which would likely be false if e.g. we’re in a simulation).
You point out that adopting such priors requires biting an unpleasant bullet. That alone is not a reason for someone not to adopt them and bite the bullet anyway. The real reason is purely psychological: people don’t want to accept a Boltzmann prior; they are simply not built that way.
Of course I write this from the POV of someone who does not accept the Boltzmann prior. From the POV of someone who does, time itself does not properly exist—or at least they should always expect, with overwhelming probability, to cease thinking coherently within the next few seconds—so an explanation based on psychology is problematic, since psychology takes time to happen in a brain...
The cheapest approach is to decline to differentiate between labeling systems that conform to all known observations. That way, you commit only to the observations themselves.
The conventional interpretation of the Bell experiments violates this by treating c as a universal speed barrier. There is no evidence that such a barrier applies to things we have no experience of.
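Since the thread leans on the Bell experiments, a minimal sketch of what those experiments actually constrain may be useful. This is my own toy illustration using the standard CHSH correlator S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′): any local hidden-variable model obeys |S| ≤ 2, while the quantum singlet-state prediction E(x,y) = −cos(x − y) reaches 2√2 at suitable angles.

```python
import numpy as np

def E(x, y):
    """Quantum singlet-state correlation for analyzer angles x, y (radians)."""
    return -np.cos(x - y)

# Standard angle choices that maximize the quantum CHSH value.
a, a2 = 0.0, np.pi / 2            # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 (= 2*sqrt(2)): exceeds the local-realist bound of 2
```

The experiments establish that the observed correlations exceed 2; what that violation *means*—nonlocality, incompleteness, or something else—is exactly the interpretive step under dispute here.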
I have no wish to defend the ‘standard’ interpretation, whatever that is—but if you stick just to the observations themselves and provide no additional interpretation, then you are passing up an opportunity for massive compaction by way of explanation.
Moreover, supposing that the c limit applies only to the things we can see means adding rules that stray very far from sticking just to the observations themselves.