I have this half-baked idea that trying to be rational by oneself is a slightly pathological condition. Humans are naturally social, and it would make sense to distribute cognition over several processors, so to speak. It would explain the tendency I notice in relationships to polarize behavior: if my partner adopts the position that we should go on vacations as much as possible, I almost automatically assume the role of worrying about money, for example, and we then work out a balanced solution together. If each of us were to decide on our own, our opinions would be much less polarized.
I could totally see how it would make sense for some members of a group to adopt low-probability beliefs, and how that would benefit the group overall.
Is there any merit to this idea? Considering the well-known failures of group rationality, I wonder if this is something that has long been disproved.
There are studies that compared the performance of couples with that of randomly assigned pairs (from the same group) and found that couples perform better. This suggests that couples specialize and at the same time rely on the specialization of the other partner (“I knew you’d make the appointment”).
The other side of the coin is breaking up: you feel like a part of your brain has been ripped out, namely the part you outsourced to your partner.
Just like when the internet goes out and you can’t get to Google/Wikipedia/etc.! But more traumatic, considering how much more bandwidth is exchanged between people in physical and emotional space.
Yes, it is difficult to maintain balance when the other person is pushing in some direction. You feel the instinct to push the other way, as if to provide balance on average. The problem is that balance on average means imbalance in your head, if the other person is unbalanced.
It’s like when we debate what 2+2 equals: the other person insists that it is 3, and when I say 4, there is a risk that in the future we will settle on a compromise value of 3.5, which I already perceive as wrong. So people have the social instinct to say at least 5, so that the future compromise value may be 4, even if they never really believed it was 5.
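To make the arithmetic explicit: if we assume the eventual compromise is the simple midpoint of the two stated positions (an assumption of this sketch, not something claimed above), then when the other side says y, you must state 2x − y to steer the midpoint onto your true value x. A minimal Python sketch:

```python
# Minimal sketch of the compromise arithmetic, assuming the eventual
# compromise is the simple midpoint of the two stated positions.

def compromise(a: float, b: float) -> float:
    """Midpoint of two stated positions."""
    return (a + b) / 2

def strategic_statement(true_value: float, opponent: float) -> float:
    """What to state so that the midpoint lands on your true value."""
    return 2 * true_value - opponent

print(compromise(3, 4))                          # 3.5 -- honesty loses ground
print(strategic_statement(4, 3))                 # 5   -- the exaggerated claim
print(compromise(3, strategic_statement(4, 3)))  # 4.0 -- midpoint now correct
```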
One possible solution would be to make everyone write down their opinion before hearing the opinions of others. But that can only be done in artificial settings, not in real life: we have usually already heard the opinions of some people. Also, if we have iterated debates about the same topic (e.g. the vacations), we can already predict what our partner will say.
To me it simply means that to have a rational debate, it is better to exclude people who are strongly mindkilled about something. (Obviously, deciding who they are is a problem on a higher level.) Maintaining balance is difficult on its own, and almost impossible when someone keeps pushing you to one side: you either fall in the direction you are pushed, or you lean the opposite way and fall down later when you are alone. We should not overestimate our own ability to be reasonable in difficult situations.
I can imagine a debate where you flip a coin and either present your true opinion or role-play a selected opinion. The problem is: how would you create the set of role-played opinions?
What if you forget to include something important? What if most of the supposedly “random” opinions are actually variants of one side (which is already overrepresented in the sincere part of the debate), while the other side is underrepresented (and some third side is completely absent)? That would be quite likely if the people who prepare the “random” options come from the same population as the sincere debaters: they would add many minor variants of their own opinion, because those would sound meaningful, and then a few obvious strawmen of their opponents, to create a feeling of a fulfilled duty.
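A toy simulation of that failure mode (everything below is invented: the coin-flip protocol is the one sketched above, and the opinion pool is deliberately biased in exactly the way described):

```python
# Toy simulation of the coin-flip debate protocol sketched above.
# The role-play pool is deliberately biased, as the comment worries:
# mostly minor variants of the majority view, plus a strawman of side B.
import random

random.seed(0)

sincere_views = ["A"] * 7 + ["B"] * 3                 # the group's real opinions
roleplay_pool = ["A-variant"] * 8 + ["strawman-B"] * 2  # pool built by the majority

def stated_opinion(own: str) -> str:
    # Heads: speak sincerely. Tails: role-play an opinion drawn from the pool.
    return own if random.random() < 0.5 else random.choice(roleplay_pool)

debate = [stated_opinion(view) for view in sincere_views]
print(debate)
# Variants of A dominate even the "randomized" half of the debate,
# so side B is still drowned out and some third side C never appears at all.
```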
Mercier & Sperber made a similar argument, commenting that e.g. things that seem like biases in the context of a single individual (such as confirmation bias) are actually beneficial for the decision-making of a group. An excerpt:
… the idea that the confirmation bias is a normal feature of reasoning that plays a role in the production of arguments may seem surprising in light of the poor outcomes it has been claimed to cause. Conservatism in science is one example (see Nickerson 1998 and references therein). Another is the related phenomenon of groupthink, which has been held responsible for many disasters, from the Bay of Pigs fiasco (Janis 1982) to the tragedy of the Challenger shuttle (Esser & Lindoerfer 1989; Moorhead et al. 1991) (for review, see Esser 1998). In such cases, reasoning tends not to be used in its normal context: that is, the resolution of a disagreement through discussion. When one is alone or with people who hold similar views, one’s arguments will not be critically evaluated. This is when the confirmation bias is most likely to lead to poor outcomes. However, when reasoning is used in a more felicitous context – that is, in arguments among people who disagree but have a common interest in the truth – the confirmation bias contributes to an efficient form of division of cognitive labor.
When a group has to solve a problem, it is much more efficient if each individual looks mostly for arguments supporting a given solution. They can then present these arguments to the group, to be tested by the other members. This method will work as long as people can be swayed by good arguments, and the results reviewed in section 2 show that this is generally the case. This joint dialogic approach is much more efficient than one where each individual on his or her own has to examine all possible solutions carefully. The advantages of the confirmation bias are even more obvious given that each participant in a discussion is often in a better position to look for arguments in favor of his or her favored solution (situations of asymmetrical information). So group discussions provide a much more efficient way of holding the confirmation bias in check. By contrast, the teaching of critical thinking skills, which is supposed to help us overcome the bias on a purely individual basis, does not seem to yield very good results (Ritchart & Perkins 2005; Willingham 2008).
For the confirmation bias to play an optimal role in discussions and group performance, it should be active only in the production of arguments and not in their evaluation. Of course, in the back-and-forth of a discussion, the production of one’s own arguments and the evaluation of those of the interlocutor may interfere with each other, making it hard to properly assess the two processes independently. Still, the evidence reviewed in section 2.1 on the understanding of arguments strongly suggests that people tend to be more objective in evaluation than in production. If this were not the case, the success of group reasoning reviewed in section 2.3 would be very hard to explain.
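To make the quoted division of labor concrete, here is a toy model (all numbers, names, and the scoring rule are invented for illustration): production is biased, with each agent searching only for arguments favoring its own option, while evaluation is objective, with the group pooling and scrutinizing all arguments together.

```python
# Toy model of the division of cognitive labor in the excerpt.
# Production is biased: each agent looks only for arguments favoring
# its own option. Evaluation is objective: the group keeps strong
# arguments regardless of which option they support.
import random

random.seed(1)

OPTIONS = ["A", "B", "C"]
MERIT = {"A": 0.3, "B": 0.7, "C": 0.5}  # hidden quality of each option

def search_arguments(option: str, effort: int) -> list:
    """Biased production: argument strength tracks the option's hidden merit."""
    return [random.random() * MERIT[option] for _ in range(effort)]

# Ten agents, each championing a random option, each spending the same effort.
pool = {opt: [] for opt in OPTIONS}
for favored in (random.choice(OPTIONS) for _ in range(10)):
    pool[favored].extend(search_arguments(favored, effort=5))

# Objective evaluation: count the arguments strong enough to survive scrutiny.
strong = {opt: sum(a > 0.4 for a in args) for opt, args in pool.items()}
print(strong, "->", max(strong, key=strong.get))  # the best option usually wins
```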
That’s a powerful idea, and it actually goes deeper than you may think. We are divided even internally. There is reason to think that your rational decision-making consists of multiple sub-processes that combine and compare various points of view. Each sub-process interacts with the other sub-processes much as you would interact with another person in conversation. Your mental sub-processes may not even distinguish between thoughts coming from another part of your brain and thoughts coming from another person.