How are you even going to do those calculations exactly? If you approximate, it'll be measurable.
Ultimately, there is the minimal-descriptive-complexity approach, which yields things like GR by assuming as few absolutes as possible, and then there's the minimal-complexity-of-implementation-on-a-very-specific-machine approach, which would have yielded a lot of false predictions had anybody bothered to use it as measurements improved.
edit: also, under an ontology where invariants and relationals with no absolutes are not simpler, it's awfully strange to find oneself in a universe which looks like ours. The way I see it, there are better and worse ways to assign priors, and if you keep making observations that have very low priors under one assignment but not the other, you should consider the prior scheme under which you keep predicting wrong to be the worse one.
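As a toy sketch of that scoring idea (the numbers here are entirely made up for illustration): compare two prior schemes by the cumulative log probability each assigned to the outcomes actually observed. A scheme that keeps being surprised accumulates a much worse score.

```python
import math

# Hypothetical probabilities each scheme assigned to the observations
# that actually occurred. Scheme A kept being surprised; scheme B
# predicted reasonably well.
observed_prob_under_A = [0.01, 0.02, 0.01, 0.05]
observed_prob_under_B = [0.60, 0.70, 0.55, 0.65]

# Cumulative log score: higher (closer to zero) is better.
score_A = sum(math.log(p) for p in observed_prob_under_A)
score_B = sum(math.log(p) for p in observed_prob_under_B)

print(score_A, score_B)  # A is far more negative, i.e. the worse scheme
```

This is just the standard logarithmic scoring rule; repeated low-probability surprises under one assignment but not the other show up directly as a worse total.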
You seem to think I find GR and quantum mechanics strange, or something. No, it's perfectly normal to live in a universe without Newtonian notions of "fixed distance". And GR does not have "no absolutes"; it has plenty of absolutes: a fixed pseudo-Riemannian manifold with a fixed metric tensor (whose components can be expressed in any coordinate system you like).
A model like GR’s is exactly the kind that I would like to see for quantum mechanics—one where it’s perfectly clear what the universe is and what equations apply to it, and ideally an explanation of how we observers arise within it. For this position, MWI seems to be the only serious contender, followed perhaps by objective collapse, although the latter seems unlikely.
But wouldn't GR still fall prey to the same 'hard to implement on a TM' argument? Also, by the way, one could define a relational model of computation that does not permit an outside view (relational QM is such a thing). It's not clear which model of computation would be more complex.
With regard to objective collapse: I recall reading a fairly recent paper on the impact of slight non-linearities in QFT on MWI-like superpositions, with the conclusion that slight non-linearities would lead to objective collapse occurring when the superposition becomes too massive. Collapse does seem unlikely on its own, if you view it as some nasty, inelegant addition; but if it arises as a product of a slight non-linearity, it seems entirely reasonable, especially if the non-linearity exists as part of quantum gravity. It has historically been common for a relationship thought linear to be found non-linear as measurements improved. (The linear model is simplest, but the non-linear models are many: one specific non-linear model is a priori less likely than the linear one, but the totality of non-linear models is not.)
Without collapse you still have the open question of the Born rule, by the way. There has been a suggestion to count the distinct observers somehow, but it seems to me that this wouldn't work right if part of the wavefunction is beamed off into space (and thus doesn't participate in the decoherence of an observer), although I've never seen a concrete proposal for how the observers should be counted...
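To spell out why naive counting is in tension with the Born rule, here is a toy two-outcome measurement (hypothetical amplitudes of my choosing): counting one branch per outcome gives 50/50, while squared amplitudes give 1/3 vs 2/3.

```python
import math

# Hypothetical state: |psi> = sqrt(1/3)|up> + sqrt(2/3)|down>
amp_up = math.sqrt(1 / 3)
amp_down = math.sqrt(2 / 3)

# Born rule: probability = squared amplitude.
born_weights = [amp_up ** 2, amp_down ** 2]

# Naive branch counting: one branch per distinct outcome.
branch_counting = [0.5, 0.5]

print(born_weights, branch_counting)  # [~0.333, ~0.667] vs [0.5, 0.5]
```

Any observer-counting proposal has to recover the squared-amplitude weights rather than the uniform ones, which is exactly where the concrete details seem to be missing.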
And back to Turing machines: they can't handle true real numbers, so any physics as we know it can only be approximated, and it's not at all clear what an approximate MWI should look like.
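A minimal sketch of why the approximation would be measurable (my own toy, using the chaotic logistic map as a stand-in for "physics"): run the same dynamics at two finite precisions and watch the rounding error grow from the last digit up to order one.

```python
import struct

def f32(x):
    """Round a Python float (double precision) to IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Iterate x -> 4x(1-x), which is chaotic, in double vs single precision.
# A Turing machine carries only finitely many digits, and chaos amplifies
# whatever digits it drops.
x64 = 0.2
x32 = f32(0.2)
max_divergence = 0.0

for _ in range(50):
    x64 = 4.0 * x64 * (1.0 - x64)
    x32 = f32(4.0 * x32 * (1.0 - x32))
    max_divergence = max(max_divergence, abs(x64 - x32))

print(max_divergence)  # grows far beyond the initial ~1e-8 rounding error
```

The two runs agree to eight digits at the start and end up disagreeing about the coarse state of the system, which is the sense in which "if you approximate, it'll be measurable."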