The attempt at making the hypothesis falsifiable itself already warrants an upvote.
So bonding over policy might be a game-theoretic strategy to find allies, at the cost of obviously alienating some people. Very interesting hypothesis. How might this be made falsifiable? I’d reject the hypothesis if I saw politicking decrease or stay constant as the need for allies increases, assuming satisfactory measures for both politicking and need for allies.
Well, the adaptation may have been well-balanced in the ancient environment, but imbalanced for today. (Which could explain why people are uncalibrated.) So… let’s just separate the “what” from the “why”. Let’s assume that people are running an algorithm that doesn’t even have to make sense. We just have to throw in a lot of different inputs, examine the outputs, and make a hypothesis about the algorithm. The whole meaning of that would be a prediction that if we keep running experiments, the outputs will keep being generated by the same algorithm.
That’s the “what” part. The “why” part would be a story about how such an algorithm would have produced good results in the ancient environment.
Unfortunately, I can’t quite imagine running that experiment. Would we… take random people off the street, ask them how many friends and enemies they have, then put them in a room together and see how long it takes until someone starts debating politics? Or create an artificial environment with artificial “political sides”, like a reality show?