My rather marginal view is that both UFOs and Bigfoot are the same phenomenon, which can appear only in situations of “low concentration of human attention”. In some sense it is similar to a large-scale Schrödinger’s cat, which can be in a state of both alive and dead only when unobserved.
This explains why there is plenty of evidence but never conclusive evidence.
Could you clarify whether you attribute the similarity to
a) how human minds work, or
b) how the physical world works, or
c) something I am not thinking of?
b would seem clearly mistaken to me:
In some sense it is similar to a large-scale Schrödinger’s cat, which can be in a state of both alive and dead only when unobserved.
For this I would recommend using the decoherence conception of what measurements do (which is the natural choice in the Many Worlds Interpretation and still highly relevant if one assumes that a physical collapse occurs during measurement processes).
From this perspective, what any measurement does is to separate the wave function into a bunch of contributions, each of which contains the measurement device showing result x and the measured system having the property x that is being measured[1]. Due to the high-dimensional space that the wave function moves in, these parts will tend to never meet again, and this is what the classical limit means[2].
When people talk about ‘observation’ here, it is important to realize that an arbitrary physical interaction with the outside world is sufficient to count. This includes air molecules, thermal radiation, cosmic radiation, and very likely even gravity[3]. For objects large enough that we can see them, remaining ‘unobserved’ for any length of time does not happen without extreme effort[4].
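To make this concrete, here is a minimal numerical sketch (my illustration, not part of the original comment): a single two-state system becomes entangled with an environment, and the interference term of its reduced density matrix is suppressed exactly by the overlap of the two environment states. Any interaction that makes the environment states distinguishable does the job; no human is involved.

```python
# Minimal decoherence sketch (illustration only, not from the comment).
# A system a|0> + b|1> couples to environment states |E0>, |E1>; the
# off-diagonal ("interference") term of the reduced density matrix is
# multiplied by the overlap <E0|E1>.
import numpy as np

a = b = 1 / np.sqrt(2)  # equal superposition of the system

def reduced_density_matrix(overlap):
    """System density matrix after entangling with an environment
    whose two pointer states have inner product <E0|E1> = overlap."""
    e0 = np.array([1.0, 0.0])
    e1 = np.array([overlap, np.sqrt(1.0 - overlap**2)])
    # Joint state a|0>|E0> + b|1>|E1>, stored as psi[system, env]
    psi = np.vstack([a * e0, b * e1])
    return psi @ psi.T  # partial trace over the environment

for overlap in (1.0, 0.5, 0.0):  # no interaction -> full 'measurement'
    rho = reduced_density_matrix(overlap)
    print(f"<E0|E1> = {overlap:.1f} -> off-diagonal = {rho[0, 1]:.2f}")
# Prints 0.50, 0.25, 0.00: once the environment fully distinguishes
# the two states, the system behaves like a classical mixture.
```

Air molecules, photons and the rest play the role of |E0>, |E1> here, which is why arbitrary physical interactions count as ‘observation’.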
For anything macroscopic, there is no reason to believe that “human observation” is remotely relevant for observing classical behaviour.
This assumes that this is a useful measurement. More generally, any arbitrary interaction between two systems does the same thing except that there is no legible “result x” or “property x” which we could make use of.
Of course, if there is a collapse which actually removes most of the parts, there is an additional reason why they will not meet in the future. The measurements we have done so far do not show any indication of a collapse in the regimes we could access, which implies that this process of decoherence is sufficient as a description of everyday behaviour. The reason why we cannot access further regimes is that decoherence kicks in and makes the behaviour classical even without the need for a physical collapse.
Though getting to experiments which remove the other decoherence sources well enough that gravity’s decoherence could even be observed is one of the large goals that researchers are striving for.
E.g. Decoherence and the Quantum-to-Classical Transition by Maximilian Schlosshauer has a nice derivation and numbers for the ‘not-being-observed’ time scales: Table 3.2 gives the time scales resulting from different ‘observers’ for a dust grain of size 0.01 mm as “1 s due to cosmic background radiation, 10⁻¹⁸ s from photons at room temperature, 10⁻³¹ s from collisions with air molecules”.
Sabine Hossenfelder argues against the idea that decoherence is measurement, e.g. here: http://backreaction.blogspot.com/2019/10/what-is-quantum-measurement-problem.html As I understand it, the main difference of her view is that decoherence is a relation between objects within the system, while measurement is related to a “collapse” of the whole system.
I think (I give it maybe 30 percent probability) that the general nature of the UFO phenomenon is that it is anti-epistemic, that is, it actively prevents our ability to get definite knowledge about it. How exactly this happens is not clear, and there could be several ideas.
One idea (it will be quantum woo, I know) which I find attractive is that other observers are in the situation of Wigner’s friend. A remote observer in such a state can interact with other non-collapsed objects. But the only way I can learn about this is by getting some strange stories after this remote observer has been observed by me and “collapsed”. However, I can’t personally observe such high-strangeness events. This puts a limit on the power of the evidence I can get about high-strangeness phenomena.
Note that by “collapse” I mean here not an observation of a single object inside the universe, but my observation of the whole universe, which fully collapses it.
I think (I give it maybe 30 percent probability) that the general nature of the UFO phenomenon is that it is anti-epistemic, that is, it actively prevents our ability to get definite knowledge about it. How exactly this happens is not clear, and there could be several ideas.
Something jumped out at me here. Regardless of the explanation, there’s a testable experiment in the works here. We could confirm or falsify this anti-epistemic property.
Setup: find the ‘base rate’ of UFO sightings, i.e. how often humans and aircraft sensors see them. Then determine how large an area you need to cover.
Cover half of a sufficiently large area with thousands/millions of constantly recording high-resolution cameras. Use AI to check the footage for UFOs.
The other half is your control region. Elicit UFO reports in both regions. (You might put cameras in both regions but not power the ones in the control region, so human reporters don’t know which region they are in.)
Prediction: if UFOs are anti-epistemic, you will get no UFO reports from the region covered by cameras, and you will get a statistically meaningful number (because you chose a large enough collection area with enough people) from the control region.
If the cameras ever pick up anything it will be blurry and distant, of course.
Obviously you then swap the groups and run the cameras in the control region.
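For what it’s worth, the statistics of such a comparison are straightforward. A sketch, under the assumption (mine, for illustration) that reports arrive as Poisson counts with equal exposure in both regions; all numbers are made up:

```python
# Hypothetical analysis sketch for the camera/control proposal above.
# Assumption (mine): sighting reports in the two matched regions are
# Poisson with equal exposure, so conditional on the total, the
# camera-region count is Binomial(total, 0.5) under the null
# hypothesis that cameras make no difference.
from scipy.stats import binom

camera_reports = 3    # made-up counts, purely for illustration
control_reports = 21

total = camera_reports + control_reports
# One-sided p-value: the chance of this few (or fewer) reports in the
# camera region if there is no anti-epistemic effect.
p_value = binom.cdf(camera_reports, total, 0.5)
print(f"one-sided p = {p_value:.2g}")  # ~1.4e-04 for these counts
```

The base rate estimated in the setup step tells you how much area and observation time you need before counts of this size are realistic.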
It would be weird if reality worked this way, and we can debate theories after empirical confirmation, but reality already is weird in many other ways.
Really interesting idea.
We could check already existing data from e.g. parapsychology for this effect. As I remember, it was observed there that the stronger the controls in the experiments, the smaller the so-called psi-effect, which was usually interpreted as evidence against psi.
But I suspect that the meta-anti-epistemic nature of the phenomena will appear even in such a setup, and it will produce initially promising but then declining results.
What she’s mainly arguing there is that decoherence does not solve the measurement problem because it does not result in the Born rule without further assumptions. She also links another post where she argues that attempts to derive the Born rule via rational choice theory are non-reductionist.
It might be that she thinks that means that some separate collapse is likely in addition to the separation into a mixture via decoherence, where the collapse selects a particular outcome from the mixture, but even if that were true, such a collapse would, I think, have to occur after or simultaneously with decoherence or it would be observable.
None of this leads, as far as I can tell, to the strange expectations that you seem to have.
As I understand it, the main difference of her view is that decoherence is a relation between objects within the system, while measurement is related to a “collapse” of the whole system.
I think I would agree with “decoherence does not solve the measurement problem”, as the measurement problem has different sub-problems. One corresponds to the measurement postulate, which different interpretations address differently and which Sabine Hossenfelder is mostly referring to in the video.
But the other one is the question of why the typical measurement result looks like a classical world—and this is where decoherence is extremely powerful: it works so well that we do not have any measurements which manage to distinguish between the hypotheses of
“only the expected decoherence, no collapse”
“the expected decoherence, but additional collapse”
With regards to her example of Schrödinger’s cat, this means that the state |alive>+|dead> will not actually occur. It will always be a state where the environment must be part of the equation, such that the state is more like |alive; trillions of photons encode a live cat>+|dead; trillions of photons encode a dead cat> after a nanosecond, and it already includes any surrounding humans after a microsecond (light has gone 300 m in all directions by then).
When human perception starts being relevant, the state is
|alive; photons encode alive; human retina excitations encode alive>+|dead; photons encode dead; human retina encodes dead>
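Written out in one formula (formatting mine), the point is that the environment states attached to the two branches become essentially orthogonal almost immediately:

```latex
% The environment-including cat state described above (formatting mine).
% The bare superposition |alive> + |dead> never occurs in isolation;
% within nanoseconds the joint state is
\[
  |\Psi(t)\rangle \;\approx\;
  |\text{alive}\rangle\,|E_{\text{alive}}(t)\rangle
  \;+\;
  |\text{dead}\rangle\,|E_{\text{dead}}(t)\rangle ,
  \qquad
  \langle E_{\text{alive}}(t)\,|\,E_{\text{dead}}(t)\rangle \approx 0 ,
\]
% where |E> collects the photons, the air molecules, and eventually
% the retinas of any surrounding humans.
```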
With regards to the first part of the measurement problem, this is not yet a solution. As such I would agree with Sabine Hossenfelder. But it does take away a lot of the weirdness, because there is no branch of the wave function that contains non-classical behaviour[1].
Wigner’s friend.
You got me here. I did not follow the large debate around Wigner’s friend as i) this is not a topic I should spend huge amounts of time on, and ii) my expectation was that it would “boil down to normality” once I managed to understand all of the details of what is being discussed anyway.
It can of course be that people would convince me otherwise, but before that happens I do not see how these types of situations could lead to strange behaviour that isn’t already part of the well-established examples such as Schrödinger’s cat. Structurally, they only differ in that there are multiple subsequent ‘measurements’, and this can only create new problems if the formalism used for measurements is the source. I am confident that the many worlds and Bohmian interpretations do not lead to weirdness in measurements[2], such that I am as yet not convinced.
This will generally be the case for any practical purposes. Mathematically, there will be minute contributions away from classicality.
At least not to this type of weirdness.
I think (I give it maybe 30 percent probability) that the general nature of the UFO phenomenon is that it is anti-epistemic
Thanks for clarifying! (I take this to be mostly ‘b) physical world’ in that it isn’t ‘humans have bad epistemics’)
Given the argument of the OP, I would at least agree that the remaining probability mass for UFOs/weirdness as a physical thing is on the cases where the weird things do mess with our perception, sensors and/or epistemics.
The difficult thing about such hypotheses is that they can quickly evolve towards being able to explain anything, and thereby become worthless as a world-model.
My hand-wavy view is that the ‘consciousness’ which causes collapse is a very small (collapse-resistant, as Chalmers wrote) object inside the brain. For example, it is the electric potential of the membrane of a single neuron. As a result, everything outside it, the whole universe, is in some sense the Schrödinger cat.
The whole of ‘macroscopic quantum effects’ are interferences between whole universe branches from the point of view of this small quantum object in the brain. It could be rephrased as: the small quantum object in the brain is itself in complex quantum states, which may sound more plausible.
Because the interference is happening between whole branches, photon-caused decoherence of some objects inside each branch is not relevant.
This is why Everett called his theory the “relative state” formulation of QM: there is a relation (a product of state vectors) between two systems, the observer and the universe. Note that the later “many worlds interpretation” is an oversimplification of this idea, as it excludes interference between branches.
One aspect which I disagree with is that collapse is the important thing to look at.
Decoherence is sufficient to get classical behaviour on the branches of the wave function. There is no need to consider collapse if we care about ‘weird’ vs. classical behaviour. This is still the case even if the whole universe is collapse-resistant (as is the case in the many worlds interpretation).
The point of this is that true cat states (= superposed universe branches) do not look weird.
The whole of ‘macroscopic quantum effects’ are interferences between whole universe branches from the point of view of this small quantum object in the brain.
Superposition of the universe: we can certainly consider the possibility that the macroscopic world is in a superposition as seen from our brain. This is what we should expect (absent collapse) just from the relative sizes of the universe and the brain:
The size of our brain puts a finite bound on the dimensionality of the space of all possible brain states (we can include all sub-atomic particles for this).
If the number of branches of the universe is larger than the number of possible brain states, then every possible wave function contains some contributions in which the universe is in a superposition with regard to the brain. Some brain states must be associated with multiple branches.
The universe is a lot larger than the brain, and dimensionality scales exponentially with particle number.
Further, it seems highly likely that many physical brain states correspond to identical mind states (some unnoticeable vibration propagating through my body does not seem to scramble my thinking very much).
Because of this, anyone following the many worlds interpretation should agree that from our perspective, the universe is always in a superposition—no unknown brain properties required. But due to decoherence (and assuming that branches will not meet), this makes no difference and we can replace the superposition with a probability distribution.
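The counting argument above can be condensed into a pigeonhole statement (notation mine, not the commenter’s):

```latex
% Pigeonhole form of the counting argument (notation introduced here).
% d_B = number of distinguishable brain states, K = number of
% decoherent branches, each branch carrying one definite brain state:
\[
  |\Psi\rangle \;=\; \sum_{k=1}^{K} c_k \,|B_{f(k)}\rangle \otimes |U_k\rangle ,
  \qquad
  f : \{1,\dots,K\} \to \{1,\dots,d_B\} .
\]
% If K > d_B, some brain state b is hit by many branches:
\[
  \exists\, b :\quad \bigl|f^{-1}(b)\bigr| \;\ge\; \lceil K / d_B \rceil ,
\]
% i.e. relative to that brain state, the rest of the universe is a
% superposition of at least K/d_B branches.
```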
Perhaps this is captured by your “why Everett called his theory the ‘relative state’ formulation of QM” (I did not read his original works).
The question now becomes the interference between whole universe branches:
A deep assumption in quantum theory is locality which implies that two branches must be equal in all properties[1] in order to interfere[2].
Because of this, interference of branches can only look like “things evolving in a weird direction” (double slit experiment) and not like “we encounter a wholly different branch of reality” (fictional stories where people meet their alternate-reality versions).
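A quick numerical illustration of that restriction (my sketch, using the standard which-path analysis): fringe visibility in a two-path setup equals the overlap of everything else the two branches differ in, so branches that differ in any recorded property simply add intensities.

```python
# Sketch (mine): interference between two branches is weighted by the
# overlap of everything they are entangled with. Equal in all other
# properties (overlap 1) -> full fringes; any permanently recorded
# difference (overlap 0) -> a plain sum of intensities, i.e. "things
# evolving in a weird direction" at most, never a meeting of two
# wholly different realities.
import numpy as np

x = np.linspace(-1.0, 1.0, 5)      # positions on the screen
psi1 = np.exp(+2j * np.pi * x)     # amplitude via path/branch 1
psi2 = np.exp(-2j * np.pi * x)     # amplitude via path/branch 2

for env_overlap in (1.0, 0.5, 0.0):  # <E1|E2>
    intensity = (np.abs(psi1) ** 2 + np.abs(psi2) ** 2
                 + 2 * env_overlap * np.real(psi1 * np.conj(psi2)))
    print(f"<E1|E2> = {env_overlap:.1f}:", np.round(intensity, 2))
# At overlap 0 the pattern is flat: the branches still exist but can
# no longer influence each other.
```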
Because of this, I do not see how quantum mechanics could create the weird effects that it is supposed to explain.
If we do assume that human minds have an extra ability to facilitate interaction between otherwise distant branches when those branches are in a superposition relative to us, this of course could create a lot of weirdness.
But this seems like a huge claim to me that would depart massively from much of what current physics believes. Without a much more specific model, this feels closer to a non-explanation than to an explanation.
More strictly: they must have mutual support in phase-space. For non-physicists: a point in phase-space is how classical mechanics describes a world.
This is not a necessary property of quantum theories, but it is one of the core assumptions used in e.g. the standard model. People who explore quantum gravity do consider theories which soften this assumption.