Imagine you build a computer model, and it has an ontology. Ontology 1.
This computer model simulates the behavior of particles. These particles and their interactions form another computer. A computer inside a computer simulation. The internal computer has its own ontology. Ontology 2. The internal computer performs measurements of its world.
Basically, the idea is that Ontology 2 is what the computer model knows about itself, and that the uncertainty principle applies to our measurements, which in the model are represented by Ontology 2.
That means Ontology 1 literally does not have to follow the laws of physics that we observe; it merely has to produce observations that are consistent with the laws of physics we observe.
Um… that just sounds like the simulation hypothesis, and doesn’t seem relevant. (Or is that what the paper means, and all the news articles have crazy authors?)
Although you could probably get that written up and have everyone report it as revolutionary.
The article says “The connection between uncertainty and wave-particle duality comes out very naturally when you consider them as questions about what information you can gain about a system. Our result highlights the power of thinking about physics from the perspective of information”.
Ok, keep that in mind, along with the primary/secondary ontology thing.
Now, we have an observer in a model (the computer inside the computer). And that observer makes measurements under no special rules, just the basic interactions of the physics engine simulating the particles in Ontology 1.
Let’s ask the observer to measure a particle’s position and momentum with total certainty.
The observer makes the first measurement, and records it, and this alters the state of the particle. When the observer goes to make a second measurement, the state of the particle is obviously different after the first measurement than before.
The measurement gets stored in Ontology 2.
Ontology 2 is a different kind of information than Ontology 1. The uncertainty principle only applies to Ontology 2.
It’s whatever emergent information the model comes to know about itself by containing a neural network that performs measurements.
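To make that concrete, here is a minimal toy sketch of the two-ontology setup (the names ToyWorld, Observer, and measure_position are made up for illustration, and it follows the classical-disturbance picture I just described, not a proper quantum treatment). Ontology 1 is the simulator's own state; Ontology 2 is nothing but the observer's stored records.

```python
# Toy sketch only: Ontology 1 = the simulator's state, Ontology 2 = the
# observer's records. The names and the "kick" disturbance are illustrative.
import random


class ToyWorld:
    """Ontology 1: the state the physics engine actually evolves."""

    def __init__(self, x, p):
        self.x = x  # position, in the simulator's own bookkeeping
        self.p = p  # momentum, in the simulator's own bookkeeping


class Observer:
    """A process inside the world; its records are Ontology 2."""

    def __init__(self):
        self.records = []  # Ontology 2: nothing but recorded outcomes

    def measure_position(self, world, kick=1.0):
        outcome = world.x
        world.p += random.gauss(0.0, kick)  # the interaction disturbs Ontology 1
        self.records.append(("x", outcome))
        return outcome

    def measure_momentum(self, world, kick=1.0):
        outcome = world.p
        world.x += random.gauss(0.0, kick)
        self.records.append(("p", outcome))
        return outcome


world = ToyWorld(x=0.0, p=1.0)
observer = Observer()
observer.measure_position(world)
observer.measure_momentum(world)  # this outcome reflects the already-disturbed state
print(observer.records)           # all the observer can ever compare against theory
```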
The disturbance interpretation is kind of deprecated nowadays.
Well, you could look at Heisenberg’s Uncertainty Principle, σ_x σ_p ≥ ħ/2, and try to interpret what it means.
But I’m suggesting something else entirely.
Set that equation off to the side, and then build a model of measurement.
We don’t have a model of measurement actually taking place, by Everett’s standards. Computers have a little way to go before they can run a model that size. But we will, and in the meantime, any abstract thinker should be able to grok this.
In our models, we have relationships between measurable quantities. In Everett’s model, the measurable quantities are emergent features, existing in an inner ontology defined by the neural network of an observer that exists as a physical process in the primary ontology.
The idea here is, every measurement the observer makes will be consistent with the uncertainty principle. The uncertainty principle returns to the model as a pattern in the measurements made.
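Here is roughly what that pattern would look like, as a toy statistical check (standard textbook quantum mechanics in numpy, ħ set to 1, and an arbitrary prepared wavepacket; none of this is taken from the paper). Prepare many identical copies, record position outcomes on half of them and momentum outcomes on the other half, and the spreads in the recorded outcomes respect σ_x σ_p ≥ ħ/2 even though that inequality is never written into the sampling code.

```python
# Toy check: the uncertainty bound shows up as a pattern in recorded
# outcomes on an ensemble of identically prepared wavepackets.
import numpy as np

hbar = 1.0                      # work in units where hbar = 1
N = 4096
x = np.linspace(-50.0, 50.0, N)
dx = x[1] - x[0]

# An arbitrary prepared state (a Gaussian with internal momentum structure,
# so it is not a minimum-uncertainty state).
sigma = 2.0
psi_x = np.exp(-x**2 / (4 * sigma**2)) * np.cos(0.7 * x)
psi_x /= np.sqrt(np.sum(np.abs(psi_x) ** 2) * dx)

# Momentum-space amplitudes via FFT (p = hbar * k); only |psi|^2 is needed.
psi_k = np.fft.fftshift(np.fft.fft(psi_x))
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
p = hbar * k

prob_x = np.abs(psi_x) ** 2
prob_x /= prob_x.sum()
prob_p = np.abs(psi_k) ** 2
prob_p /= prob_p.sum()

rng = np.random.default_rng(0)
# Half the ensemble gets a position measurement, half a momentum measurement;
# all the observer keeps is the list of recorded outcomes.
x_records = rng.choice(x, size=20000, p=prob_x)
p_records = rng.choice(p, size=20000, p=prob_p)

product = x_records.std() * p_records.std()
print(f"sigma_x * sigma_p from the records: {product:.3f}  (>= hbar/2 = {hbar / 2})")
```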
I have, and so have other people.
We don’t have a model of what measurement? Quantum measurement?
You’re hinting that the UP will emerge from any notion of measurement, without any other assumptions. I am not sure you can pull that off, though.
In any case, you need to be a lot more precise.
Measurement according to Everett’s PhD Thesis, page 9:
http://philosophyfaculty.ucsd.edu/faculty/wuthrich/PhilPhys/EverettHugh1957PhDThesis_BarrettComments.pdf
Summary here:
https://github.com/MazeHatter/Everett-s-Relative-State-Formulation
That doesn’t answer the question “what is being explained”, it answers the question “how it is being explained”.
Not actually a summary, since you introduce elements not present in the original.
I don’t follow.
It seems to me, in the Copenhagen interpretation, measurement was a collapse event. Everett is saying, you know, we can probably just model the observer as a physical process.
Measurement, to Everett, would be a physical process which can be modeled. A measurement can be said to have objectively happened when the observer creates a record of the measurement.
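Here is a minimal two-qubit sketch of that, in the relative-state spirit: the observer’s memory is just another physical subsystem, “measuring” is a unitary interaction that correlates the memory with the system, and the measurement has objectively happened once the memory holds the record. This is ordinary textbook linear algebra, not code from Everett’s thesis; the names are mine.

```python
# Toy model of "measurement as a physical process that leaves a record".
# A system qubit interacts unitarily with an observer-memory qubit; no
# collapse postulate is used anywhere. Names are illustrative only.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# System starts in a superposition; the memory starts blank (|0>).
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
system = alpha * ket0 + beta * ket1
memory = ket0
state = np.kron(system, memory)  # joint state, ordered as system (x) memory

# The "measurement" interaction: copy the system's basis value into memory
# (a CNOT with the system as control, the memory as target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = CNOT @ state  # now alpha*|0>|saw 0> + beta*|1>|saw 1>

# A record now objectively exists in the memory, and the system has a
# definite state *relative to* each possible record.
amps = state.reshape(2, 2)  # rows: system value, columns: memory record
for record in (0, 1):
    branch = amps[:, record]
    branch = branch / np.linalg.norm(branch)
    print(f"system state relative to memory record {record}: {branch}")
```

Relative to each memory record, the system has a definite state, and that per-record state is the sense of “relative state” I mean below.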
Ok, that might be a valid point. What specific elements are not in the original?
Yes, but not a non-physical event. That would be Consciousness Causes Collapse.
Yes, but you’re reading that as a classical physical process, and then guessing that disturbance must be the classical mechanism by which the appearance of quantum weirdness arises.
Disturbance? Actually, what I described is Hugh Everett’s Relative State Formulation.
The inner ontology is the “relative state”.
DeWitt’s Many Worlds idea really is pretty different from Everett’s ideas, and not for the better, IMO.