It’s easy enough to get a single sensory datum — sample a classical state according to the Born probabilities, sample some coordinates, pretend that there’s an eyeball at those coordinates, record what it sees. But once we’ve done that, how do we get our next sense datum?
This doesn’t seem like it should be too hard: if you have some degrees of freedom which you take as representing your ‘eyeball’, and a preferred basis of ‘measurement states’ for that eyeball, repeatedly projecting onto that measurement basis will give sensible results for a sequence of measurements. The key point is that you don’t have to project e.g. all the electrons in the universe onto their position basis, just the eyeball DOF onto their preferred ‘measurement basis’ (which won’t look like projecting the electrons onto their position basis either); the relevant entangled DOF in the rest of the universe will then automatically get projected onto a sensible ‘classical-like’ state. The property of the universe’s evolution that makes this procedure sensible is non-interference between the ‘branches’ produced by successive measurements: if you project onto two different eyeball states at time 1, then at time 2 those states will be approximately non-interfering in the eyeball basis. This is formalized in the consistent-histories approach to QM.
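To make the non-interference condition concrete, here is a small numerical sketch (a toy model I made up for illustration, not something from the above): one system qubit is projected onto its z basis at two times, with one fresh ‘record’ qubit per time step standing in for an environment that watches the system. The consistent-histories quantity to check is the decoherence functional D(α, β) = ⟨ψ| C_β† C_α |ψ⟩ between different histories α ≠ β; it is sizable when nothing records the system (interference) and vanishes once the records are kept.

```python
import numpy as np
from itertools import product
from functools import reduce

# Toy check of the consistent-histories condition (illustrative model, not from the
# comment above): a system qubit is projected onto its z basis at two times. Without
# an environment, different histories interfere; with a fresh 'record' qubit copying
# the system's state each step, the off-diagonal decoherence-functional entries
# vanish and the sequence of projections behaves like an ordinary classical record.

def kron_all(ops):
    return reduce(np.kron, ops)

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Hd = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard: the system's own evolution
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]          # z-basis projectors for the system

n_steps = 2
dim_env = 2 ** n_steps            # one record qubit per measurement time

def embed_sys(op):
    """A system operator acting as identity on all record qubits."""
    return np.kron(op, np.eye(dim_env))

def record_cnot(step):
    """CNOT copying the system's z-basis state into record qubit `step`."""
    ops0 = [P[0]] + [I2] * n_steps
    ops1 = [P[1]] + [I2] * n_steps
    ops1[1 + step] = X
    return kron_all(ops0) + kron_all(ops1)

def class_operator(history, keep_records):
    """C_alpha = P_{a2} U_2 P_{a1} U_1 for the history alpha = (a1, a2)."""
    C = np.eye(2 * dim_env, dtype=complex)
    for step, outcome in enumerate(history):
        U = embed_sys(Hd)
        if keep_records:
            U = record_cnot(step) @ U
        C = embed_sys(P[outcome]) @ U @ C
    return C

psi0 = np.zeros(2 * dim_env, dtype=complex)
psi0[0] = 1.0                     # |0>_system |0...0>_records

for keep_records in (False, True):
    label = "with records (decoherence)" if keep_records else "no records"
    histories = list(product([0, 1], repeat=n_steps))
    offdiag = max(
        abs(np.vdot(class_operator(b, keep_records) @ psi0,
                    class_operator(a, keep_records) @ psi0))
        for a, b in product(histories, repeat=2) if a != b
    )
    print(f"{label}: largest |D(alpha, beta)| between different histories = {offdiag:.3f}")
```

With the records included, the off-diagonal entries come out zero, so the probabilities assigned by repeated projection add up consistently across histories; without them they don’t.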
What’s somewhat trickier is identifying the DOF that make a good ‘eyeball’ in the first place, and what the preferred basis should be. More broadly, it’s not even known which quantum theories will give rise to ‘classical-like’ states at all. The place to look for progress here is probably the decoherence literature, along with quantum Darwinism and Jess Riedel’s work.
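One concrete handle from that literature is Zurek’s predictability sieve: the preferred basis is picked out as the set of system states that generate the least entropy while coupled to their environment. Below is a rough numerical sketch with an invented pure-dephasing model (all couplings and parameters are made up), in which the z basis comes out as the pointer basis:

```python
import numpy as np
from functools import reduce

# Rough sketch of a predictability sieve (model and parameters invented for
# illustration): a qubit coupled to a few environment qubits by a pure-dephasing
# interaction H = sz_sys * sum_k g_k sz_k. z eigenstates stay pure (good pointer
# states); superpositions of them decohere.

def kron_all(ops):
    return reduce(np.kron, ops)

rng = np.random.default_rng(0)
n_env = 6
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

couplings = rng.uniform(0.5, 1.5, n_env)
H = sum(
    g * kron_all([sz] + [sz if j == k else I2 for j in range(n_env)])
    for k, g in enumerate(couplings)
)

# H is diagonal in the z basis, so the propagator is just a phase on each basis state.
t = 1.0
U = np.diag(np.exp(-1j * np.diag(H) * t))

plus = np.array([1.0, 1.0]) / np.sqrt(2)
env0 = kron_all([plus] * n_env)            # environment starts in a product of |+> states

def linear_entropy(sys_state):
    """1 - Tr(rho^2) of the system after evolving sys_state (x) env0 and tracing out the env."""
    psi = U @ np.kron(sys_state, env0)
    psi = psi.reshape(2, -1)               # rows: system index, cols: environment index
    rho = psi @ psi.conj().T
    return 1.0 - np.real(np.trace(rho @ rho))

print("z eigenstate :", linear_entropy(np.array([1.0, 0.0])))   # ~0: stays pure, a pointer state
print("x eigenstate :", linear_entropy(plus))                   # near 1/2: decoheres
```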
I think that virtually every specialist would give you more or less the same answer as interstice, so I don’t see why it’s an open question at all. Sure, constructing a fully rigorous “eyeball operator” is very difficult, but defining a fully rigorous bridge rule in a classical universe would be very difficult as well. The relation to anthropics is more or less spurious IMO (MWI is just confused), but also anthropics is solvable using the infra-Bayesian approach to embedded agency. The real difficulty is understanding how to think about QM predictions about quantities that you don’t directly observe but that your utility function depends on. However, I believe that’s also solvable using infra-Bayesianism.
My own most recent pet theory is that the process of branching is deeply linked to thermalization, so to find model systems we should look to things modeling the flow of heat/entropy—e.g. a system coupled to two heat baths at different temperatures.
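For what it’s worth, the simplest version of that kind of model system fits in a few lines: a single qubit damped by a hot and a cold bath via a Lindblad master equation, relaxing to a nonequilibrium steady state that carries a heat current. All rates and temperatures below are invented for illustration:

```python
import numpy as np

# Minimal sketch of a system coupled to two heat baths (parameters invented for
# illustration): a single qubit with gap `omega`, damped by a hot and a cold bath,
# integrated to its nonequilibrium steady state.

omega = 1.0                       # qubit energy gap
T_hot, T_cold = 2.0, 0.5          # bath temperatures (k_B = hbar = 1)
gamma = 0.05                      # coupling rate to each bath

H = omega * np.diag([0.0, 1.0])               # |0> ground, |1> excited
sm = np.array([[0.0, 1.0], [0.0, 0.0]])       # lowering operator |0><1|
sp = sm.conj().T

def dissipator(L, rho):
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def bath_term(rho, T):
    n = 1.0 / (np.exp(omega / T) - 1.0)       # thermal occupation at the qubit frequency
    return gamma * ((n + 1.0) * dissipator(sm, rho) + n * dissipator(sp, rho))

def drho_dt(rho):
    return -1j * (H @ rho - rho @ H) + bath_term(rho, T_hot) + bath_term(rho, T_cold)

# Crude Euler integration to the steady state.
rho = np.diag([1.0, 0.0]).astype(complex)
dt = 0.02
for _ in range(50000):
    rho = rho + dt * drho_dt(rho)

# Heat currents: rate of energy flow into the qubit from each bath.
J_hot = np.real(np.trace(H @ bath_term(rho, T_hot)))
J_cold = np.real(np.trace(H @ bath_term(rho, T_cold)))
print("excited-state population:", np.real(rho[1, 1]))
print("heat in from hot bath:", J_hot, "| heat out to cold bath:", -J_cold)
```

In the steady state the energy absorbed from the hot bath per unit time matches the energy dumped into the cold bath, so the qubit carries a steady heat/entropy flow between them.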
I agree that the problem doesn’t seem too hard, and that there are a bunch of plausible-seeming theories. (I have my own pet favorites.)
^_^
Also, thanks for all the resource links!
I think quantum Darwinism is on the right track. FWIW, I found Zurek’s presentation of it here to be clearer.
The gist of it is, AFAICT (sketched a bit more concretely below):
- Entangled states have a symmetry called envariance.
- This symmetry implies that certain states must have equal probabilities.
- Other states can be decomposed into the kind from before.
- Putting this together implies the Born rule.
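For concreteness, here is roughly how those steps fit together, as I understand them (a compressed paraphrase, not Zurek’s full derivation). For a maximally entangled state
$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|s_1\rangle|e_1\rangle + |s_2\rangle|e_2\rangle\big),$$
swapping $|s_1\rangle \leftrightarrow |s_2\rangle$ on the system can be undone by swapping $|e_1\rangle \leftrightarrow |e_2\rangle$ on the environment alone; that is envariance. Since nothing done only to the environment can change predictions about the system, and the swap exchanges the two outcomes, $s_1$ and $s_2$ must be equally likely: probability $1/2$ each. For unequal amplitudes, say
$$\sqrt{\tfrac{1}{3}}\,|s_1\rangle|e_1\rangle + \sqrt{\tfrac{2}{3}}\,|s_2\rangle|e_2\rangle,$$
you fine-grain the environment, writing $|e_2\rangle = \tfrac{1}{\sqrt{2}}\big(|e_{2a}\rangle + |e_{2b}\rangle\big)$ with $|e_{2a}\rangle, |e_{2b}\rangle$ orthogonal, so the state becomes an equal-amplitude superposition of three orthogonal terms. The equal-probability argument then gives each term probability $1/3$, hence $P(s_1) = 1/3$ and $P(s_2) = 2/3$: probabilities proportional to squared amplitudes, which is the Born rule.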