There is a whole school of thought based on treating the dualistic quantum formalism (the one that alternates between unitary evolution of a state vector, and instantaneous projection of the vector onto a measurement basis) as a model of general cognition. I see it as deriving from two traditions in physics.
First, I have to emphasize for the billionth time that in the original Copenhagen interpretation, the wavefunction or state vector is not a physical entity, it’s not the objective state of anything, it has exactly the same sort of reality as a probability distribution. Sorry if you’ve heard this before, but the Sequences guarantee a steady supply of LW readers who think that the basic choice in making sense of QM is between “wavefunction is real and it collapses” and “wavefunction is real and it doesn’t collapse”. In the original Copenhagen interpretation, the wavefunction is not real, and the “collapse” is no different to the “collapse” of a probability distribution when you get new information. You have to understand this “epistemic” perspective on quantum mechanics, to understand the history I’m about to give.
OK. So, there is a history of people trying to justify the quantum formalism as, in effect, a modification of ordinary probability theory which is appropriate for describing subatomic particles, because of some peculiarity of their relationship to us as observers. For example, the peculiarity might be that we always disturb them in trying to obtain information about their state. Or the peculiarity might be that precision regarding one physical property implies imprecision regarding a complementary physical property. Typically, these arguments will start with the abstract feature of subatomic particles, like “complementarity” or “nonseparability”, and then they will try to logically derive the use of complex Hilbert spaces and noncommuting operators from this beginning.
The other tradition in physics, which I see as having contributed, is the tradition of treating quantum formalism as a new way of thinking or reasoning about things. A prominent example would be “quantum logic”. Another would be people who say that quantum mechanics is “just” noncommutative probability theory, as if there were no further mystery in the idea of a “noncommutative probability”.
What these traditions have in common is an attempt to justify quantum mechanics as the last word in physics, by an argument about the relation between observer and observed. The first variant says that some observed systems, those with a particular property, need to be modeled in this peculiar way. The second variant focuses on the observer, in effect saying that they can or should or must think this way.
One genesis of the “quantum cognition” school lies in the extension of the first type of argument (or rationalization) to domains way beyond quantum physics, e.g. financial markets or social situations. The act of querying the system itself changes the system’s state, and the idea is that you should model such a system using the same linear algebra of complementary observables as appears in quantum physics.
As for the second type of argument, as a proposition of cognitive science, this turns into claims that human psychology—memory, decision-making—shows evidence of quantum-like representations being used. In other words, the idea here is not that reality is intrinsically better represented by state vectors accessed through noncommuting operators, but just that this is how our brains store and retrieve information.
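To make the claim concrete, here is a toy sketch of the kind of model the quantum-cognition literature uses (this example and all its numbers are my own illustration, not from any specific paper): answering a question is modeled as projecting a “belief state” vector onto that question’s “yes” subspace, with collapse. Because the two projectors do not commute, the joint probability of answering “yes” to both questions depends on the order in which they are asked, which is the mechanism invoked to explain question-order effects in surveys.

```python
import numpy as np

# Belief state: a unit vector in R^2, at an arbitrary angle (pi/3 here).
psi = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])

# "Yes" to question A: projection onto the e1 axis.
P_A = np.outer([1.0, 0.0], [1.0, 0.0])

# "Yes" to question B: projection onto an axis rotated 45 degrees.
b = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
P_B = np.outer(b, b)

def prob_yes_then_yes(P1, P2, state):
    """P(first question 'yes', then second question 'yes'), with collapse."""
    v = P1 @ state
    p1 = v @ v                  # probability of the first "yes"
    if p1 == 0:
        return 0.0
    v = v / np.sqrt(p1)         # collapse: renormalize onto the "yes" subspace
    w = P2 @ v
    return p1 * (w @ w)         # times probability of the second "yes"

p_ab = prob_yes_then_yes(P_A, P_B, psi)  # ask A first, then B
p_ba = prob_yes_then_yes(P_B, P_A, psi)  # ask B first, then A
print(p_ab, p_ba)  # 0.125 vs ~0.467: the order of questioning matters
```

Classically, P(A and B) would be a single joint probability with no order dependence; the noncommuting projectors are what break that symmetry. Whether human question-order effects actually require this machinery, rather than ordinary context effects, is exactly what is disputed below.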
All this isn’t exactly woo. But it does create confusion, and a number of bad arguments are made. It’s a bad idea to call something “quantum-like” just because it responds to your observation. The attempt to justify the peculiarities of quantum probabilities in this way is not going to work. The arguments that “quantum-like representations” are being employed in human cognition are weak—there are plenty of other ways to generate the cognitive effects which are being advanced as evidence.
On top of “quantum cognition”, this Italian course has an extra ingredient of “quantum psychiatry”, which I would guess draws inspiration from Lacan, whose school likes to borrow concepts from mathematics and physics in order to symbolize the unconscious.
Look at this. Does it set off anybody else’s quantum woo detector? And yet it’s a course offered by a real university, as far as I can see.