Could you clarify whether you attribute the similarity to a) how human minds work, or b) how the physical world works, or c) something I am not thinking of?
b would seem clearly mistaken to me:
> In some sense it is similar to large-scale Schrödinger’s cat, which can be in the state of both alive and dead only when unobserved.
For this I would recommend using the decoherence conception of what measurements do (which is the natural choice in the Many Worlds Interpretation, and still highly relevant if one assumes that a physical collapse occurs during measurement processes). From this perspective, what any measurement does is separate the wave function into a bunch of contributions, where each contains the measurement device showing result x and the measured system having the property x that is being measured[1]. Due to the high-dimensional space that the wave function moves in, these parts will tend to never meet again, and this is what the classical limit means[2]. When people talk about ‘observation’ here, it is important to realize that an arbitrary physical interaction with the outside world is sufficient to count. This includes air molecules, thermal radiation, cosmic radiation, and very likely even gravity[3]. For objects large enough that we can see them, remaining ‘unobserved’ for any length of time does not happen without extreme effort[4].
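As a toy illustration of this point (my own sketch, not a derivation from any specific source): each environment particle that scatters off the system carries away a little “which-branch” information, and the interference term in the system’s reduced density matrix shrinks exponentially with the number of such interactions. The coupling angle `theta` below is an arbitrary illustrative value.

```python
import math

# Toy decoherence model: a system starts in the superposition
# (|0> + |1>)/sqrt(2) and interacts with n_env environment particles.
# Each particle's state is tilted by +theta or -theta depending on the
# system's branch, so every scattering event multiplies the overlap of
# the two environment branch states by cos(2*theta). The off-diagonal
# (interference) element of the reduced density matrix is
# 0.5 * |<E_1|E_0>|, which therefore decays exponentially in n_env.

def offdiag(n_env: int, theta: float = 0.1) -> float:
    """Magnitude of the interference term after n_env interactions."""
    return 0.5 * abs(math.cos(2 * theta)) ** n_env

for n in (0, 10, 100, 1000):
    print(f"{n:5d} environment particles -> interference {offdiag(n):.3e}")
```

Even a thousand individually weak interactions suppress the interference by many orders of magnitude, which is why air molecules and thermal photons ‘observe’ a macroscopic object almost instantly.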
For anything macroscopic, there is no reason to believe that “human observation” is remotely relevant for observing classical behaviour.
1. This assumes that this is a useful measurement. More generally, any interaction between two systems does the same thing, except that there is no legible “result x” or “property x” which we could make use of. ↩︎
2. Of course, if there is a collapse which actually removes most of the parts, there is an additional reason why they will not meet in the future. The measurements we have done so far show no indication of a collapse in the regimes we could access, which implies that decoherence alone is sufficient as a description of everyday behaviour. The reason we cannot access further regimes is that decoherence kicks in and makes the behaviour classical even without a physical collapse. ↩︎
3. Though getting to experiments which remove the other decoherence sources well enough that gravity’s decoherence could even be observed is one of the large goals that researchers are striving for. ↩︎
4. E.g. *Decoherence and the Quantum-to-Classical Transition* by Maximilian Schlosshauer has a nice derivation and concrete numbers for the ‘not-being-observed’ time scales: Table 3.2 gives the time scales resulting from different ‘observers’ for a dust grain of size 0.01 mm as roughly 1 s due to cosmic background radiation, and many orders of magnitude shorter due to photons at room temperature and collisions with air molecules. ↩︎
I think I would agree with “decoherence does not solve the measurement problem”, as the measurement problem has several sub-problems. One corresponds to the measurement postulate, which different interpretations address differently and which Sabine Hossenfelder is mostly referring to in the video. But the other is the question of why the typical measurement result looks like a classical world, and this is where decoherence is extremely powerful: it works so well that we do not have any measurements which manage to distinguish between the hypotheses of

- “only the expected decoherence, no collapse”
- “the expected decoherence, but additional collapse”
With regards to her example of Schrödinger’s cat, this means that the state |alive> + |dead> will not actually occur. The environment must always be part of the equation, so that after a nanosecond the state is more like |alive; trillions of photons encode a live cat> + |dead; trillions of photons encode a dead cat>, and it already includes any surrounding humans after a microsecond (light has travelled 300 m in all directions by then). By the time human perception starts being relevant, the state is |alive; photons encode alive; human retina excitations encode alive> + |dead; photons encode dead; human retina excitations encode dead>. With regards to the first part of the measurement problem, this is not yet a solution, and here I would agree with Sabine Hossenfelder. But it does take away a lot of the weirdness, because there is no branch of the wave function that contains non-classical behaviour[1].
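To spell out why such a superposition becomes unobservable, here is the standard reduced-density-matrix calculation (my addition, with $|E_a\rangle$, $|E_d\rangle$ denoting the environment states that encode “alive” and “dead”):

$$|\Psi\rangle = \tfrac{1}{\sqrt{2}}\left(|\text{alive}\rangle\,|E_a\rangle + |\text{dead}\rangle\,|E_d\rangle\right)$$

$$\rho_{\text{cat}} = \operatorname{Tr}_E |\Psi\rangle\langle\Psi| = \tfrac{1}{2}\Big(|\text{alive}\rangle\langle\text{alive}| + |\text{dead}\rangle\langle\text{dead}|\Big) + \tfrac{1}{2}\Big(\langle E_d | E_a\rangle\,|\text{alive}\rangle\langle\text{dead}| + \text{h.c.}\Big)$$

Once trillions of photons have recorded the outcome, $\langle E_d | E_a \rangle$ is astronomically close to zero, so the interference terms vanish for all practical purposes and only the classical mixture remains.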
You got me here. I did not follow the large debate around Wigner’s friend, as i) this is not a topic I should spend huge amounts of time on, and ii) my expectation was that these debates would “boil down to normality” once I managed to understand all the details of what is being discussed anyway.
It may of course be that people could convince me otherwise, but until that happens I do not see how these types of situations could lead to strange behaviour that isn’t already part of well-established examples such as Schrödinger’s cat. Structurally, they differ only in that there are multiple subsequent ‘measurements’, and this can only create new problems if the formalism used for measurements is the source. I am confident that the many-worlds and Bohmian interpretations do not lead to weirdness in measurements[2], so I am as of yet not convinced.
Thanks for clarifying! (I take this to be mostly ‘b) physical world’ in that it isn’t ‘humans have bad epistemics’) Given the argument of the OP, I would at least agree that the remaining probability mass for UFOs/weirdness as a physical thing is on the cases where the weird things do mess with our perception, sensors and/or epistemics.
The difficult thing about such hypotheses is that they can quickly evolve into being able to explain anything, at which point they become worthless as a world-model.
1. This will generally be the case for any practical purpose. Mathematically, there will be minute contributions away from classicality. ↩︎
2. At least not to this type of weirdness. ↩︎