There is something out there, but it is not anything that can even be conceived as existing in a classical view-from-nowhere style.
To the extent that this seems to be meaningful at all, this would seem to imply that not only is the universe mysterious and ineffable, it’s also uncomputable—since anything you can calculate on a Turing machine (or even a few kinds of hypercomputers) can be “conceived of as existing in a classical view-from-nowhere style” (it’s just a list of memory states, together with the program). That’s a lot of complexity just to be able to deny the idea of objective reality!
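(To make the “list of memory states” point concrete, here is a minimal toy sketch; the machine and all the names in it are made up purely for illustration. A Turing-machine run can be recorded as a plain static object, the program plus its sequence of configurations, which is exactly the kind of thing that can be “conceived of as existing” from no particular point of view.)

```python
# Toy sketch (illustrative only): a Turing-machine run recorded as a static
# list of configurations. The "universe" of this machine is just the program
# together with its memory states, viewable from nowhere in particular.

def run_tm(program, tape, state="start", max_steps=100):
    """Run a TM and return the full trace of (state, head, tape) configurations."""
    tape = dict(enumerate(tape))          # sparse tape; missing cells are blank "_"
    head = 0
    trace = [(state, head, dict(tape))]
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        trace.append((state, head, dict(tape)))
    return trace

# Made-up toy program: scan right over 1s, append one more 1, halt.
program = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

for step, config in enumerate(run_tm(program, "111")):
    print(step, config)                   # the whole history as one static object
```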
Well, general relativity, while descriptively very simple, is awfully complex if you measure complexity by the length of a simulator program, so perhaps in the interest of consistency you should join the anti-Einsteinian crank camp first.
Those incredibly successful theories were based entirely on a notion of complexity in a more abstract language, where things like having no outside view and no absolute spacetime are simpler than having an outside view.
Nice non sequitur you’ve got there. Newtonian mechanics is simpler than general relativity. It also happens to be wrong, so there’s no point going back to it. But GR is not even that complex relative to a theory that claims that the cosmos is an ineffable mystery—GR has well-defined equations, and takes place in a fixed pseudo-Riemannian manifold. You can in fact freely talk about the objective spacetime location of events in GR, using whatever coordinate system you like. This is because it is a good theory.
Actually GR shows the advantage of having an outside view and being able to fit things into a comprehensive picture. If my graduate GR course had refused to talk about manifolds and tensors, insisted that you could only measure “lengths relative to specific observers”, and shown us a bunch of arcane equations for converting measurements between different observers’ realities, I imagine it wouldn’t have been half as fun.
(Although the fact that certain solutions to the GR equations allow closed timelike curves and thereby certain kinds of hypercomputation is less than ideal—hopefully future unified theories will conspire to eliminate such shenanigans.)
The point is that the absence of absolute time really gets in the way of implementing a naive simulator, the sort that just updates per timestep. Furthermore, there is no preferred coordinate frame in GR, but there is a preferred coordinate frame in a simulator.
Ultimately, a Turing machine is highly arbitrary and comes with a complex structure, privileging implementations that fit into that structure over conceptually simpler theories which do not.
But it’s no problem for a simulator that derives a proof of the solution to the equations, such as a SAT solver. Linear time is not necessary for simulation, just easier for humans to grasp.
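(A toy sketch of that contrast, with nothing GR-specific about it; the discretized oscillator and all the names are mine. The same dynamics can be obtained either by a naive per-timestep update or by solving the whole history at once as a single constraint system, with no built-in arrow of time.)

```python
# Toy sketch: the same discretized dynamics computed two ways.
# (1) a naive simulator that updates state per timestep;
# (2) a global solve that treats every timestep as one unknown in a single
#     constraint system, with no preferred time direction.
import numpy as np

n, dt = 100, 0.05
# Discretized oscillator: u[k+1] - (2 - dt**2) * u[k] + u[k-1] = 0

# (1) step-by-step update from initial data
u_step = np.zeros(n)
u_step[0], u_step[1] = 0.0, dt                      # u(0) = 0, u'(0) ~ 1
for k in range(1, n - 1):
    u_step[k + 1] = (2 - dt**2) * u_step[k] - u_step[k - 1]

# (2) all timesteps at once: assemble A u = b and solve, order-free
A, b = np.zeros((n, n)), np.zeros(n)
A[0, 0], b[0] = 1.0, 0.0                            # constraint: u[0] = 0
A[1, 1], b[1] = 1.0, dt                             # constraint: u[1] = dt
for k in range(1, n - 1):                           # interior constraints
    A[k + 1, k - 1], A[k + 1, k], A[k + 1, k + 1] = 1.0, -(2 - dt**2), 1.0

u_global = np.linalg.solve(A, b)
print(np.allclose(u_step, u_global))                # True: same history, no stepping
```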
Even if this is true, if the simulation is correct, the existence of such a preferred reference frame is unobservable to any observer inside the simulation, and therefore makes no difference. A simulation that does GR calculations in a particular coordinate system still does GR calculations.
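(A small sketch of that point, using special relativity rather than GR to keep it short; the functions are illustrative. A simulator may store events in one particular frame, but the quantities an inside observer can actually measure, such as the invariant interval, come out the same whichever frame was used.)

```python
# Sketch (special relativity, c = 1): the simulator's choice of frame leaves
# no measurable trace, because invariant quantities agree across frames.
import numpy as np

def boost(event, v):
    """Lorentz-boost an event (t, x) into a frame moving at velocity v."""
    t, x = event
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return (gamma * (t - v * x), gamma * (x - v * t))

def interval(e1, e2):
    """Invariant spacetime interval s^2 between two events."""
    dt, dx = e1[0] - e2[0], e1[1] - e2[1]
    return dt**2 - dx**2

a, b = (0.0, 0.0), (5.0, 3.0)                 # events in the simulator's frame
a2, b2 = boost(a, 0.6), boost(b, 0.6)         # the same events in a boosted frame

print(interval(a, b), interval(a2, b2))       # both 16.0: the frame choice is unobservable
```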
How are you even going to do those calculations exactly? If you approximate, it’ll be measurable.
Ultimately there is this minimal descriptive complexity approach that yields things like GR based on assumptions of as few absolutes as possible, and then there’s this “minimal complexity of implementation on a very specific machine” approach, which would have yielded a lot of false predictions had anybody bothered to try to use it as the measurements improved.
edit: also, under an ontology where invariants and relationals with no absolutes are not simpler, it’s awfully strange to find oneself in a universe which looks like ours. The way I see it, there are better and worse ways to assign priors, and if you keep making observations with very low priors under one assignment but not the other, you should consider the prior scheme under which you keep predicting wrong to be the worse one.
You seem to think I find GR and quantum mechanics strange, or something. No, it’s perfectly normal to live in a universe with no Newtonian ideas of “fixed distance”. GR does not have “no absolutes”; it has plenty of absolutes. It has a fixed pseudo-Riemannian manifold with a fixed metric tensor (that can be decomposed into components in any coordinate system you like).
A model like GR’s is exactly the kind that I would like to see for quantum mechanics—one where it’s perfectly clear what the universe is and what equations apply to it, ideally with an explanation of how we observers arise within it. For this position, MWI seems to be the only serious contender, followed perhaps by objective collapse, although the latter seems unlikely.
But wouldn’t GR still fall prey to the same ‘hard to implement on a TM’ argument? Also, one could define a relational model of computation which does not permit an outside view (indeed, relational QM is such a thing). It’s not clear which model of computation would be more complex.
With regard to objective collapse, I recall reading a fairly recent paper on the impact of slight non-linearities in QFT on MWI-like superpositions, with the conclusion that slight non-linearities would lead to objective collapse occurring when the superposition is too massive. Collapse does seem unlikely on its own (if you view it as some nasty, inelegant addition), but if it arises as a product of a slight non-linearity, it seems entirely reasonable, especially if the non-linearity exists as a part of quantum gravity. It has been historically common for a linear relationship to be found non-linear as measurements improve. (The linear model is simplest, but the non-linear models are many—one specific non-linear model is a priori less likely than a linear one, but the totality of non-linear models is not.)
Without collapse you still have the open question of the Born rule, by the way. There has been a suggestion to count the distinct observers somehow, but it seems to me that this wouldn’t work right if part of the wavefunction is beamed into space (and thus doesn’t participate in the decoherence of an observer), though I’ve never seen a concrete proposal as to how the observers should be counted...
And back to Turing machines: they can’t do true real numbers, so any physics as we know it can only be approximated, and it’s not at all clear what an approximate MWI should look like.
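(A small illustration of how an approximation can become measurable; the logistic map here is just a stand-in for any dynamics that amplifies small errors, not a claim about actual physics. Two finite-precision approximations of the “same” real-number dynamics diverge visibly within a few dozen steps.)

```python
# Sketch: finite precision vs. "true" reals. The logistic map amplifies
# rounding differences, so float32 and float64 runs of identical dynamics
# end up at completely different values after ~60 iterations.
import numpy as np

x32 = np.float32(0.1)
x64 = np.float64(0.1)
for _ in range(60):
    x32 = np.float32(4.0) * x32 * (np.float32(1.0) - x32)
    x64 = 4.0 * x64 * (1.0 - x64)

print(x32, x64)   # visibly different: the approximation error has become "measurable"
```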
QM is computable. rQM doesn’t change that. If an observer wants to do quantum cosmology, they can observe the universe, not from nowhere but from their own perspective, store the observations, and compute with them. Map-wise, nothing much has changed.
Territory-wise, it looks like the universe can’t be a (classical) computer. Is that a problem?
As I understand it, any quantum computer can be modeled on a classical one, possibly with exponential slowdown.
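(A minimal statevector sketch of what that classical modelling looks like; the helper function is mine, not from any quantum-computing library. An n-qubit state takes 2**n complex amplitudes on a classical machine, which is where the exponential cost comes from.)

```python
# Minimal statevector sketch: simulating n qubits classically needs 2**n
# complex amplitudes, hence the (at worst) exponential slowdown.
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a 2x2 single-qubit gate to the target qubit of an n-qubit state."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, target, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

n = 20                                     # 2**20 ~ one million amplitudes already
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                             # start in |00...0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
for q in range(n):
    state = apply_gate(state, H, q, n)

print(state[:4])                           # uniform superposition, amplitude 2**(-n/2)
```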
“Be modeled” doesn’t mean “be”.
I guess that’s the root of our disagreement about instrumentalism.
The dictionary seems to be on my side.
I can see how your conclusion follows from that assumption, but the assumption is as strange as the conclusion. Ideally, an argument should proceed from plausible premises.
Disengaging due to lack of convergence.
Well, that’s one way of avoiding updating.
“The universe is not anything that can even be conceived as existing in a classical view-from-nowhere style” also means that the universe can’t be modeled on a computer (classical or otherwise). From a complexity-theory point of view, this makes the rQM cosmology an exceptionally bad one, since you would have to add something uncomputable to QM to make it true (if there is even any logical model that makes it true at all).
The fact that you can still computably model a specific observer’s subjective perspective isn’t really relevant.
Out of the box, a classical computer doesn’t represent the ontology of rQM, because all information has an observer-independent representation, but a software layer can hide literal representations in the way a LISP gensym does. Uncomputability is not required.
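(A rough Python analogue of the gensym point; the class and method names are purely illustrative. The value still has a concrete, observer-independent representation underneath, but the software layer only ever exposes observer-relative readings of it.)

```python
# Rough analogue of hiding a literal representation behind a software layer:
# the stored value is concrete, but only observer-relative readings are exposed.
# All names here are illustrative, not from any real library.
import itertools

_fresh = itertools.count()                 # gensym-style source of opaque tags

class RelationalValue:
    def __init__(self, value):
        self._value = value                # concrete representation, kept private
        self._tag = next(_fresh)           # opaque identity, like a gensym

    def relative_to(self, observer):
        """Expose the value only as measured by a particular observer."""
        return observer.measure(self._value)

class Observer:
    def __init__(self, offset):
        self.offset = offset

    def measure(self, value):
        return value - self.offset         # each observer gets its own reading

x = RelationalValue(10.0)
alice, bob = Observer(0.0), Observer(3.0)
print(x.relative_to(alice), x.relative_to(bob))   # 10.0 and 7.0
```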
In any case, classical computability isn’t a good index of complexity. It’s an index of how close something is to a classical computer. Problems are harder or easier to solve according to the technology used to solve them. That’s why people don’t write device drivers in LISP.
Um, computability has very little to do with “classical” computers. It’s a very general idea relating to the existence of any algorithm at all.
Uncomputability isn’t needed to model the ontology of rQM,