It’s really simple. The hidden variables are not local. General Relativity does not apply to particles below a certain size. Can you create a logically consistent belief set in which the FTL particles are not FTL and really just exist in multiple states at once? Yes.
You can also say that on 4/25/12, up was down and down was up, so I fell up and couldn’t get back down again.
I.e., there are infinitely many labeling systems for every set of observations. The minimal one has the least computational cost to consider, and is thus easier for people to process. Some people, however (tribals, to be specific), are more interested in protecting legacies than in using the computationally cheaper belief set. The cost is a reduced frequency of new inspirations of understanding.
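A toy sketch of the “infinitely many labelings” point: two models that agree on every observation but differ in description length, which is a crude proxy for the computational cost being discussed. All the names and the day numbering below are invented for the example.

```python
# Toy illustration: two "labeling systems" that agree on every observation
# but differ in description length (a crude proxy for computational cost).
import inspect

# Observation: every day of 2012, dropped objects went down.
observations = [(day, "down") for day in range(1, 367)]

def minimal_model(day):
    return "down"

def relabeled_model(day):
    # Same predictions, plus a do-nothing clause: on day 116 (4/25/12)
    # we declare that "up" is down, so the observable answer is unchanged.
    if day == 116:
        return "down"  # internally labeled "up", observed as "down"
    return "down"

# Both conform to all known observations...
assert all(minimal_model(d) == o for d, o in observations)
assert all(relabeled_model(d) == o for d, o in observations)

# ...but the minimal labeling is cheaper to state and to run.
print(len(inspect.getsource(minimal_model)))    # shorter
print(len(inspect.getsource(relabeled_model)))  # longer, same predictions
```

Nothing stops you from adding more relabeling clauses forever, which is why the set of observation-conforming labelings is infinite.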
Could you unpack that a little more? It sounds like you’re saying that ‘some people’ are unfairly discounting the possibility that QM is incomplete and locality is violated, for reasons that are not logically required. Is that accurate?
If so, I would like to point out that computational cheapness is not a good prior. It is vastly cheaper, computationally, to believe that our solar system is the only one and that the other dots are simulated, coarse-grained, on a thin shell surrounding it. It simplifies the universe to a mind-boggling degree for this to be the case. Indeed, we should not stop there. It is best if we get rid of the interior of the sun, the interior of the earth, the interior of every rock, trees falling in the forest, people we don’t know… people we do know… and replace our interactions with them with simulacra that make stuff up and provide just enough to maintain a thin veneer of plausibility.
The rule set needed to implement such a world is HUGE, but the data and computational complexity are sufficiently smaller to make up for it.

Don’t you think?

See also: Boltzmann brains.
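A rough minimum-description-length style sketch of that rules-versus-data tradeoff. The numbers are invented placeholders; only the two-term shape of the comparison is the point.

```python
# MDL-style accounting of the tradeoff above: a HUGE rule set can still
# come out ahead if it shrinks the data term enough. The numbers are
# invented placeholders, not estimates of anything.

worlds = {
    # simple physical laws, but an enormous amount of state to track
    "full universe": {"rules": 10**3, "data": 10**24},
    # enormous simulacra rule set, but only a thin shell of state
    "shell world": {"rules": 10**9, "data": 10**12},
}

def total_cost(world):
    # total description/computation cost = cost(rules) + cost(data | rules)
    return world["rules"] + world["data"]

for name, world in worlds.items():
    print(f"{name}: {total_cost(world):.3e}")
# The shell world is cheaper despite its huge rule set, which is exactly
# the bullet this line of argument asks you to bite.
```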
However, you’ve no evidence that you’re not a Boltzmann brain. You choose to accept on faith that you are not, and, desiring to be consistent and even-handed, you further choose to accept on faith that the entire visible universe is just as complex as it seems to be (which would likely be false if, e.g., we’re in a simulation).
You point out that adopting such priors requires biting an unpleasant bullet. That is not, by itself, a reason for someone not to adopt them and bite the bullet anyway. The real reason is purely psychological: people don’t want to accept a Boltzmann prior; they’re not built that way.
Of course, I write this from the POV of someone who does not accept the Boltzmann prior. From the POV of someone who does, time itself does not properly exist (or, at least, they should always expect, with overwhelming probability, to cease coherently thinking within the next few seconds), so an explanation based on psychology is problematic, since psychology takes time to happen in a brain...
The cheapest approach is to decline to differentiate between labeling systems that conform to all known observations. In this way, you stick to just the observations themselves.
The conventional interpretation of the Bell experiments violates this by treating c as a universal speed barrier. There is no evidence that such a barrier applies to things we have no experience of.
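For concreteness, a minimal sketch of what the Bell-test observations themselves consist of, using the standard CHSH statistic. The particular hidden-variable model below is an invented example, not anyone’s proposal: any local hidden-variable model keeps |S| ≤ 2, while quantum mechanics predicts, and experiments observe, values up to 2√2 ≈ 2.83.

```python
# Minimal CHSH sketch (standard textbook setup; the hidden-variable model
# is an invented example). The bare observations in a Bell test are the
# correlations E at four pairs of analyzer settings, combined as
#   S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
import math
import random

def local_hv_trial(a, b):
    # One local hidden-variable model: a shared random angle lam is carried
    # by both particles; each outcome depends only on the local setting
    # and lam, never on the other side's setting.
    lam = random.uniform(0.0, 2.0 * math.pi)
    A = 1 if math.cos(a - lam) >= 0 else -1
    B = 1 if math.cos(b - lam) >= 0 else -1
    return A * B

def E(a, b, n=100_000):
    # Monte Carlo estimate of the correlation at settings (a, b).
    return sum(local_hv_trial(a, b) for _ in range(n)) / n

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"local hidden-variable S = {S:.3f}  (never exceeds 2)")
print(f"quantum-mechanical max  = {2 * math.sqrt(2):.3f}")
```

The measured violation of the ≤ 2 bound is the observation; what rules out local hidden variables (and forces a choice among nonlocality, non-realism, etc.) is interpretation layered on top of it.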
I have no wish to defend the ‘standard’ interpretation, whatever that is—but if you stick just to the observations themselves and provide no additional interpretation, then you are passing up an opportunity for massive compaction by way of explanation.
Moreover, supposing that the c limit applies only to the things we can see means adding rules that stray very far from sticking just to the observations themselves.
It’s really simple. The hidden variables are not local. General Relativity does not apply to particles below a certain size.
I assume that this is your personal model, given the lack of references. Feel free to flesh it out so that it makes new, quantifiable, testable predictions.
Some people, however (tribals, to be specific), are more interested in protecting legacies than in using the computationally cheaper belief set. The cost is a reduced frequency of new inspirations of understanding.
My personal crackpot index counter clicks like mad after reading this.
You did not even remotely understand this comment. The whole point of what is written here is that there are infinitely many “not even wrong” theories that conform to all current observations. The conventional interpretation of the Bell experiments is one of the less useful ones, because it is convoluted and has a larger computational complexity than necessary.