Here is one proposal:

http://blog.wired.com/business/2009/03/yes-we-plan-how.html

Their idea seems to be to combine a social networking site with facilities for coordinating action and a karma system. If it can be designed in such a way that signals are honest, karma is fair, and the system becomes widely used, I imagine it could be highly effective. On the other hand, Facebook and co. give free karma that’s instantly visible to all your associates, so I fear it will be very difficult for the new site to break into the market.
I’m new here and didn’t know if this has been a topic of discussion yet, but I found this story to be fascinating:

http://www.physorg.com/news158928941.html

In short, two psychologists modeled decision-making in a variation of the Prisoner’s Dilemma with a “quantum” probability model. Their motivation was to reconcile results from actual studies (the participants consistently made apparently irrational choices) with what classical probability theory predicts a rational agent would choose.
Oh, and the quantum thing isn’t new-age mysticism at all. It’s simply a model wherein a choice, instead of being a binary 0 or 1, can sort of be 0 and 1 simultaneously. I don’t claim to fully understand it, but it sounds awfully interesting.
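For intuition, here is a minimal sketch of that idea (my own illustration of standard quantum probability, not the authors’ model):

```python
import numpy as np

# A binary "choice" represented as a unit vector of complex amplitudes
# over the two options, rather than as a definite 0 or 1.
state = np.array([1.0, 1.0j]) / np.sqrt(2)  # equal weight on both options

# Born rule: the probability of each outcome is the squared magnitude
# of its amplitude, so the choice is "both at once" until it resolves.
probs = np.abs(state) ** 2  # -> [0.5, 0.5]
assert np.isclose(probs.sum(), 1.0)
```

The payoff of the amplitude representation is that amplitudes can cancel as well as add, which is what produces the interference effects discussed below.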
EDIT: I looked at the context, and I’m setting a bad example for thales. This is off-topic for the post, so it should have been put in Open Thread instead.
But EY already responded, so I’ll leave my comment instead of deleting it.
“The original motivation for developing quantum mechanics in physics was to explain findings that seemed paradoxical from a classical point of view. Possibly, quantum theory can better explain paradoxical findings in psychology, as well.”
Same justification Penrose used for saying quantum mechanics is required to explain consciousness.
“If you were asked to gamble in a game in which you had a 50/50 chance to win $200 or lose $100, would you play? In one study, participants were told that they had just played this game, and then were asked to choose whether to try the same gamble again. One-third of the participants were told that they had won the first game, one-third were told they had lost the first game, and the remaining one-third did not know the outcome of their first game. Most of the participants in the first two scenarios chose to play again (69% and 59%, respectively), while most of the participants in the third scenario chose not to (only 36% played again). These results violate the ‘sure thing principle,’ which says that if you prefer choice A in two complementary known states (e.g., known winning and known losing), then you should also prefer choice A when the state is unknown.”
This is very interesting. I would guess that this is linked to instinctive fight-or-flight decisions, and has to do with adrenaline, not rational decisions.
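For reference (my arithmetic, not from the article), the gamble is favorable in expectation,

$$E[\text{gamble}] = 0.5 \times \$200 + 0.5 \times (-\$100) = +\$50,$$

so a risk-neutral expected-value maximizer would play in all three scenarios; the puzzle is the inconsistency across scenarios, not any single choice.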
“Participants who were told that their partner had defected or cooperated on the first round usually chose to defect on the second round (84% and 66%, respectively). But participants who did not know their partner’s previous decision were more likely to cooperate than the others (only 55% defected).”
I assume this is a 2-round PD? Otherwise, why would 66% defect in response to cooperation?
“As the scientists showed, both classical and quantum probability models accurately predict an individual’s decisions when the opponent’s choice is known. However, when the opponent’s action is unknown, both models predict that the probability of defection is the average of the two known cases, which fails to explain empirical human behavior.”
When the action is unknown, you don’t necessarily assume 1:1 odds. But you would certainly predict that P(defection | unknown) lies between P(defection | opponent defects) and P(defection | opponent cooperates).
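To spell out why (this is just the law of total probability, not anything from the paper): writing $D$ for “I defect” and letting $d$, $c$ be the opponent’s possible moves,

$$P(D \mid \text{unknown}) = P(D \mid d)\,P(d) + P(D \mid c)\,P(c),$$

a convex combination, since $P(d) + P(c) = 1$. With the reported 84% and 66%, any prior over the opponent’s move puts the classical prediction in $[0.66, 0.84]$; the observed 55% falls outside it.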
“To address this problem, the scientists added another component to both models, which they call cognitive dissonance and which can also be thought of as wishful thinking. The idea is that people tend to believe that their opponent will make the same choice that they do; if an individual chooses to cooperate, they tend to think that their opponent will cooperate, as well.”
This isn’t cognitive dissonance, but whatever.
“In the quantum model, on the other hand, the addition of the cognitive dissonance component produces interference effects that cause the unknown probability to deviate from the average of the known probabilities.”
Sounds to me—and this is based on more than what I quoted here—like they are simply positing that people think that the probability of their defecting is correlated with the probability of the other person defecting. Possibly they just don’t understand probability theory, and think they’re working outside it. I attended a lecture by Lotfi Zadeh, inventor of fuzzy logic, in which he made it appear (to me, not to him) that he invented fuzzy logic to implement parts of standard probability theory that he didn’t understand.
But the math for that explanation doesn’t work. You’d have to read their paper in Proceedings of the Royal Society B to figure out what they really mean.
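For intuition about how amplitudes escape the betweenness constraint, here is a toy numerical sketch (my own construction using the standard two-path interference pattern, not the authors’ actual model; the phase is a free parameter fitted to the data):

```python
import numpy as np

# Known-case defection rates from the study.
p_def_given_opp_def = 0.84
p_def_given_opp_coop = 0.66

# Classical prediction with a 50/50 prior over the opponent's move:
# a convex combination, trapped between 0.66 and 0.84.
classical = 0.5 * p_def_given_opp_def + 0.5 * p_def_given_opp_coop  # 0.75

# Quantum-style model: each "path" to my defection (opponent defects
# vs. cooperates) carries a complex amplitude, and the two paths
# interfere when the opponent's move is unknown.
theta = 1.843  # relative phase; free parameter, chosen here to fit 0.55
amp_via_def = np.sqrt(0.5 * p_def_given_opp_def)
amp_via_coop = np.sqrt(0.5 * p_def_given_opp_coop) * np.exp(1j * theta)

# The cross term 2*|a1|*|a2|*cos(theta) pushes the squared magnitude
# below the classical average.
quantum = abs(amp_via_def + amp_via_coop) ** 2  # ~0.55

print(classical, quantum)
```

With one fitted phase you can hit the observed 55%, which is exactly the sort of extra degree of freedom the classical model lacks.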
“The idea is that people tend to believe that their opponent will make the same choice that they do; if an individual chooses to cooperate, they tend to think that their opponent will cooperate, as well.”
It sounds like they’re describing Evidential Decision Theory.
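A minimal sketch of that reading (my illustration; the payoffs and the degree of correlation are assumptions, not numbers from the paper):

```python
# Standard PD payoffs to "me": T=5, R=3, P=1, S=0.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# The "wishful thinking" component: my choice is treated as evidence
# about my opponent's. Assume P(opponent matches my move) = 0.8.
match = 0.8

def edt_value(mine: str) -> float:
    """Expected payoff when the opponent's move is conditioned on mine."""
    other = "D" if mine == "C" else "C"
    return match * payoff[(mine, mine)] + (1 - match) * payoff[(mine, other)]

print(edt_value("C"))  # 0.8*3 + 0.2*0 = 2.4
print(edt_value("D"))  # 0.8*1 + 0.2*5 = 1.8, so cooperation wins under EDT
```

Under a causal reading the conditioning would be illegitimate, since my choice doesn’t cause my opponent’s; that’s the usual objection to EDT in the Prisoner’s Dilemma.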
I’ve heard other Bayesians say they’re not impressed with Zadeh. I know fuzzy logic primarily as a numerical model of a nonstandard deduction system, as opposed to anything that would be used in real life.