I know politics is the mind-killer and arguments are soldiers, yet the question still looms large: what makes some people more susceptible to arguing about politics and ideology? With some people, I can hold a differing point of view, say “well, it seems we disagree,” and carry on the conversation. With others, the conversation invariably disintegrates into a political debate with neither side yielding.
Why?
Different people may have different reasons. I guess it’s usually a form of bonding: if you believe the other person is likely to have similar political opinions, then confirming it explicitly establishes common values and common enemies, which makes you emotionally closer.
And people who often start political debates with those who disagree… could simply be uncalibrated. I mean, there is some kind of surprise or outrage when they find out that the other person doesn’t agree with them. But maybe I’m just protecting my hypothesis against falsification here. Perhaps we could find such a person and ask them to estimate how likely it is that a random person within their social group shares their opinions.
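To make that check concrete, here is a minimal sketch in Python. It scores one person’s stated estimate against the agreement rate actually measured by surveying their group. The numbers and the 15-point tolerance are illustrative assumptions on my part, not part of any real protocol.

```python
# Toy calibration check: compare a person's estimate of how many people
# in their social group share their opinion against a survey of that group.

def calibration_gap(stated_estimate: float, survey_agreement: float) -> float:
    """Absolute difference between the stated estimate and the measured rate."""
    return abs(stated_estimate - survey_agreement)

# Hypothetical numbers: the person guesses 90% agreement,
# but a survey of their group finds only 55% actually agree.
stated = 0.90
measured = 0.55

gap = calibration_gap(stated, measured)
print(f"calibration gap: {gap:.2f}")  # 0.35

# Arbitrary illustrative threshold: call the person "uncalibrated"
# if their estimate misses the measured rate by more than 15 points.
print("uncalibrated" if gap > 0.15 else "roughly calibrated")
```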
The attempt at making the hypothesis falsifiable itself already warrants an upvote.
So bonding over politics might be a game-theoretic strategy for finding allies, at the cost of obviously alienating some people. Very interesting hypothesis. How might this be made falsifiable? I’d reject the hypothesis if politicking decreased or stayed constant as the need for allies increased, assuming satisfactory measures for both politicking and need for allies.
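As a toy illustration of that rejection rule, assuming we already had per-person numeric scores for “politicking” and “need for allies” (which the comment does not specify how to obtain), one could check whether the two measures move together. The invented data and the choice of a rank correlation are my assumptions.

```python
# Toy test of the rejection criterion: does politicking rise with the
# need for allies? Reject the bonding hypothesis if the rank correlation
# is zero or negative. All numbers below are invented for illustration.
from scipy.stats import spearmanr

need_for_allies = [1, 2, 3, 4, 5, 6, 7, 8]  # hypothetical per-person scores
politicking     = [0, 1, 1, 3, 2, 4, 5, 5]  # hypothetical per-person scores

rho, p_value = spearmanr(need_for_allies, politicking)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")

if rho <= 0:
    print("reject: politicking does not rise with need for allies")
else:
    print("consistent with the bonding-for-allies hypothesis")
```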
Well, the adaptation may have been well-balanced in the ancestral environment but imbalanced for today. (Which could explain why people are uncalibrated.) So… let’s separate the “what” from the “why”. Assume people are running an algorithm that doesn’t even have to make sense. We just throw in a lot of different inputs, examine the outputs, and form a hypothesis about the algorithm. The payoff would be a prediction: if we keep running experiments, the outputs will keep being generated by the same algorithm.
That’s the “what” part. The “why” part would be a story about how such an algorithm would have produced good results in the ancestral environment.
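A minimal sketch of the black-box idea, in Python: pretend we logged, for each person, some inputs and whether they started a political debate, then fit a simple model and check whether it keeps predicting held-out outputs. Everything here is simulated; the two inputs, the logged outcome, and the logistic model are all my assumptions standing in for the unknown algorithm.

```python
# Toy "infer the algorithm from inputs and outputs" sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
expected_agreement = rng.uniform(0, 1, n)  # hypothetical input
need_for_allies = rng.uniform(0, 1, n)     # hypothetical input

# Invented ground-truth rule standing in for the unknown algorithm.
p_debate = 1 / (1 + np.exp(-(3 * need_for_allies - 2 * expected_agreement)))
started_debate = rng.random(n) < p_debate

X = np.column_stack([expected_agreement, need_for_allies])
X_train, X_test, y_train, y_test = train_test_split(
    X, started_debate, random_state=0
)

# Hypothesize the algorithm from the data, then test the prediction
# that held-out outputs are generated the same way.
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Of course, the hard part is obtaining real logged inputs and outputs rather than simulated ones.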
Unfortunately, I can’t quite imagine running that experiment. Would we… take random people off the street, ask them how many friends and enemies they have, then put them in a room together and see how long it takes before someone starts debating politics? Or create an artificial environment with artificial “political sides”, like a reality show?
Do you find yourself refusing to yield in the latter case but not the former? Or is this purely an external observation of mutually unrelenting parties?
If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.