Couldn’t one say that a model is not truly a model unless it’s instantiated in some cognitive/computational representation, and therefore since quantum mechanics is computationally intractable, it is actually quite far from being a complete model of the world? This would change it from being a map vs territory thing to more being a big vs precise Pareto frontier.
(Not sure if this is too tangential to what you’re saying.)
This is tangential to what I’m saying, but it points at something that inspired me to write this post. Eliezer Yudkowsky says things like the universe is just quarks, and people say “ah, but this one detail of the quark model is wrong/incomplete” as if it changes his argument when it doesn’t. His point, so far as I understand it, is that the universe runs on a single layer somewhere, and higher-level abstractions are useful to the extent that they reflect reality. Maybe you change your theories later so that you need to replace all of his “quark” and “quantum mechanics” words with something else, but the point still stands about the relationship between higher-level abstractions and reality.
I’m not sure I understand your objection, but I will write a response that addresses it; I suspect we are in agreement about many things. The point of my quantum mechanics model is not to model the world, it is to model the rules of reality which the world runs on. Quantum mechanics isn’t computationally intractable, but making quantum mechanical systems at large scales is. That is a statement about the amount of compute we have, not about quantum mechanics.

We have every reason to believe that if we simulated a spacetime background which ran on general relativity, threw in a bunch of quarks and electrons which run on the standard model, and started them in a (somehow) known state of the Earth, Moon, and Sun, then we would end up with a simulation which gives a plausible world-line for Earth. The history would diverge from reality due to things we left out (some things rely on navigation by starlight, cosmic rays from beyond the solar system cause bit flips which affect history, asteroid collisions have notable effects on Earth, gravitational effects from other planets probably have some effect on the ocean, etc.), and we would have to either run every Everett branch or constantly keep only one of them at random and accept slight divergences because of that. In spite of all this, the simulation should produce a totally plausible Earth, although its people would wonder where all the stars went. There do not exist enough atoms on Earth to build a computer which could actually run that simulation, but that isn’t a weakness in the model’s ability to explain the base level of reality.
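The “run every Everett branch vs. keep one at random” trade-off can be sketched with a toy branching process. The coin-flip dynamics here are a fabricated stand-in for quantum branching, purely to show the shape of the two options, not actual physics:

```python
import random

def all_branches(steps):
    """Track every branch with its weight (state -> total probability).
    Cost grows with the number of distinct states, and exponentially
    when branches don't merge; here they merge because the state is
    just a running sum."""
    branches = {0: 1.0}
    for _ in range(steps):
        nxt = {}
        for state, w in branches.items():
            for delta in (-1, +1):  # each branch splits in two
                nxt[state + delta] = nxt.get(state + delta, 0.0) + w / 2
        branches = nxt
    return branches

def single_branch(steps):
    """Keep one branch at random each step: constant memory, but each
    run gives a single diverging history rather than the full distribution."""
    state = 0
    for _ in range(steps):
        state += random.choice((-1, +1))
    return state
```

Tracking every branch preserves the whole distribution at growing cost; sampling one branch runs in constant memory at the price of run-to-run divergence, which mirrors the “slight divergences” accepted above.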
This is tangential to what I’m saying, but it points at something that inspired me to write this post. Eliezer Yudkowsky says things like the universe is just quarks, and people say “ah, but this one detail of the quark model is wrong/incomplete” as if it changes his argument when it doesn’t. His point, so far as I understand it, is that the universe runs on a single layer somewhere, and higher-level abstractions are useful to the extent that they reflect reality. Maybe you change your theories later so that you need to replace all of his “quark” and “quantum mechanics” words with something else, but the point still stands about the relationship between higher-level abstractions and reality.
My in-depth response to the rationalist-reductionist-empiricist worldview is Linear Diffusion of Sparse Lognormals, though there are still some parts of it I need to write. The main objection I have here is that the “single layer” is not so much the true rules of reality as the subset of rules that are unobjectionable because they apply everywhere and at every time. It’s like the minimal conceivable set of rules.
The point of my quantum mechanics model is not to model the world, it is to model the rules of reality which the world runs on.
I’d argue the practical rules of the world are determined not just by the idealized rules, but also by the big entities within the world. The simplest example is outer space: it acts as a negentropy source and is the reason we can assume that e.g. electrons go into the lowest orbitals (whereas if outer space were full of hydrogen, it would undergo fusion, bombard us with light, and turn the earth into a plasma instead). More elaborate examples would be e.g. atmospheric oxygen, whose strong reactivity drives a lot of chemical reactions, or even the fact that thinking of people as economic agents means that economic trade opportunities get exploited.
It’s sort of conceivable that quantum mechanics describes the dynamics as a function of the big entities, but we only really have strong reasons to believe so with respect to the big entities we know about, rather than all big entities in general. (Maybe there are some entities that are sufficiently constant that they are ~impossible to observe.)
Quantum mechanics isn’t computationally intractable, but making quantum mechanical systems at large scales is.
But in the context of your original post, everything you care about is large scale, and in particular the territory itself is large scale.
That is a statement about the amount of compute we have, not about quantum mechanics.
It’s not a statement about quantum mechanics if you view quantum mechanics as a Platonic mathematical ideal, or if you use “quantum mechanics” to refer to the universe as it really is, but it is a statement about quantum mechanics if you view it as a collection of models that are actually used. Maybe we should have three different terms to distinguish the three?
I appreciate your link to your posts on Linear Diffusion of Sparse Lognormals. I’ll take a look later. My responses to your other points are essentially reductionist arguments, so I suspect that’s a crux.
That said, I’m using “quantum mechanics” to mean “some generalization of the standard model” in many places. In practice, the actual experimental predictions of the standard model are something like probability distributions over the starting and ending momentum states of particles before and after they interact at the same place and time, so I don’t think you can actually run a raw standard model simulation of the solar system which makes sense at all.

To make my argument more explicit: I think you could run a lattice simulation of the solar system far above the Planck scale, full of classical particles (with proper masses and charges under the standard model) which all interact via general relativity, so that at each time slice you move each particle to a new lattice site based on its classical momentum and the gravitational field in the previous time slice. You would then run the standard model at each lattice site holding more than one particle, destroying all of the input particles and generating a new set of particles by sampling the standard model’s probabilistic predictions; the identities and momenta of the sampled output particles would be applied in the next time slice. I might be making an obvious particle physics mistake, but modulo my own carelessness, almost all lattice sites would have nothing on them, many would have photons, some would have three quarks, fewer would have an electron, and some tiny, tiny fraction would have anything else.
If you interpreted sets of sites containing the right number of up and down quarks as nucleons, interpreted those nucleons as atoms, used nearby electrons to recognize molecules, interpreted those molecules as objects or substances doing whatever they do in higher levels of abstraction, and sort of ignored anything else until it reached a stable state, then I think you would get a familiar world out of it if you had the utterly unobtainable computing power to do so.
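The update loop described above (classical transport between time slices, then probabilistic resampling wherever particles coincide) can be sketched in code. This is a minimal toy on a 1-D lattice, and the `interact` table is a fabricated stand-in for the standard model’s scattering distributions; only the shape of the loop is meant to match the description:

```python
import random
from collections import defaultdict

LATTICE_SIZE = 100  # 1-D lattice for illustration; the text imagines 3-D

def step(particles, interact):
    """One time slice. particles: list of (site, momentum, species).
    interact: maps a sorted tuple of incoming species to (outcomes, weights),
    a distribution over lists of (species, momentum) products -- standing in
    for the probabilistic predictions the standard model would supply."""
    # 1. Classical transport: move each particle by its momentum.
    #    (The gravitational field would adjust momenta here; omitted in this toy.)
    moved = [((site + mom) % LATTICE_SIZE, mom, sp) for site, mom, sp in particles]

    # 2. Group particles by lattice site.
    by_site = defaultdict(list)
    for p in moved:
        by_site[p[0]].append(p)

    # 3. Wherever two or more particles coincide, destroy the inputs and
    #    sample a set of output particles for the next time slice.
    out = []
    for site, group in by_site.items():
        if len(group) < 2:
            out.extend(group)
            continue
        incoming = tuple(sorted(sp for _, _, sp in group))
        outcomes, weights = interact[incoming]
        products = random.choices(outcomes, weights=weights)[0]
        out.extend((site, mom, sp) for sp, mom in products)
    return out

# Example with a fabricated, deterministic rule: two electrons meeting at
# site 5 scatter into two electrons with the stated momenta.
interact = {("e-", "e-"): ([[("e-", 1), ("e-", -1)]], [1.0])}
result = step([(4, 1, "e-"), (6, -1, "e-")], interact)
```

Note what the sketch makes visible: the quantum step only ever happens site-locally, which is exactly why, as discussed below, this scheme cannot represent superpositions or entanglement larger than a lattice cell.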
That said, I’m using “quantum mechanics” to mean “some generalization of the standard model” in many places.
I think this still has the ambiguity that I am complaining about.
As an analogy, consider the distinction between:
1. Some population of rabbits that is growing over time due to reproduction
2. The Fibonacci sequence as a model of the growth dynamics of this population
3. A computer program computing, or a mathematician deriving, the numbers in or properties of this sequence
The first item in this list is meant to be analogous to quantum mechanics qua the universe: it is some real-world entity that one might hypothesize acts according to certain rules, but which exists regardless. The second is a Platonic mathematical object that one might hypothesize matches the rules of the real-world entity. The third is an actual instantiation of this Platonic mathematical object in reality. I would maybe call these “the territory”, “the hypothetical map”, and “the actual map”, respectively.
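The three levels can be made concrete in code (a minimal sketch; the rabbit population itself, being the territory, of course has no code):

```python
# The hypothetical map: the recurrence F(n) = F(n-1) + F(n-2) with
# F(0) = 0, F(1) = 1, taken as a Platonic object, has no resource costs.
# The actual map: this function is one concrete instantiation of that
# recurrence, and it inherits limits the Platonic object lacks.
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The actual map is resource-bounded: computing fib(n) here takes n
# additions on ever-larger integers, even though the Platonic sequence
# "contains" every term at once. The rabbit population, meanwhile,
# follows the recurrence only approximately (rabbits die, food runs out).
```

The gap between the second and third levels is exactly the computational-tractability gap raised at the top of the thread.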
In practice, the actual experimental predictions of the standard model are something like probability distributions over the starting and ending momentum states of particles before and after they interact at the same place and time, so I don’t think you can actually run a raw standard model simulation of the solar system which makes sense at all. To make my argument more explicit: I think you could run a lattice simulation of the solar system far above the Planck scale, full of classical particles (with proper masses and charges under the standard model) which all interact via general relativity, so that at each time slice you move each particle to a new lattice site based on its classical momentum and the gravitational field in the previous time slice. You would then run the standard model at each lattice site holding more than one particle, destroying all of the input particles and generating a new set of particles by sampling the standard model’s probabilistic predictions; the identities and momenta of the sampled output particles would be applied in the next time slice. I might be making an obvious particle physics mistake, but modulo my own carelessness, almost all lattice sites would have nothing on them, many would have photons, some would have three quarks, fewer would have an electron, and some tiny, tiny fraction would have anything else. If you interpreted sets of sites containing the right number of up and down quarks as nucleons, interpreted those nucleons as atoms, used nearby electrons to recognize molecules, interpreted those molecules as objects or substances doing whatever they do in higher levels of abstraction, and sort of ignored anything else until it reached a stable state, then I think you would get a familiar world out of it if you had the utterly unobtainable computing power to do so.
Wouldn’t this fail for metals, quantum computing, the double slit experiment, etc.? By switching back and forth between quantum and classical, it seems like you forbid any superpositions/entanglement/etc. on a scale larger than your classical lattice size. The standard LessWrongian approach is to just bite the bullet on the many worlds interpretation (which I have some philosophical quibbles with, but those quibbles aren’t so relevant to this discussion, I think, so I’m willing to grant the many worlds interpretation if you want).
Anyway, more to the point, this clearly cannot be done with the actual map, and the hypothetical map does not actually exist, so my position is that while this may help one understand the notion that there is a rule which perfectly constrains the world, the thought experiment does not actually work out.
Somewhat adjacently, your approach to this is reductionistic, viewing large entities as being composed of unfathomably many small entities. As part of LDSL I’m trying to wean myself off of reductionism, and instead take large entities to be more fundamental, and treat small entities as something that the large entities can be broken up into.