Of Two Minds
Follow-up to: The Intelligent Social Web
The human mind evolved under pressure to solve two kinds of problems:
How to physically move
What to do about other people
I don’t mean that list to be exhaustive. It doesn’t include maintaining homeostasis, for instance. But in practice I think it hits everything we might want to call “thinking”.
…which means we can think of the mind as having two types of reasoning: mechanical and social.
Mechanical reasoning is where our intuitions about “truth” ground out. You throw a ball in the air, your brain makes a prediction about how it’ll move and how to catch it, and either you catch it as expected or you don’t. We can imagine how to build an engine, and then build it, and then we can find out whether it works. You can try a handstand, notice how it fails, and try again… and after a while you’ll probably figure it out. It means something for our brains’ predictions to be right or wrong (or somewhere in between).
I recommend this TED Talk for a great overview of this point.
The fact that we can plan movements lets us do abstract truth-based reasoning. The book Where Mathematics Comes From digs into this in math. But for just one example, notice how set theory almost always uses container metaphors. E.g., we say elements are in sets like pebbles are in buckets. That physical intuition lets us use things like Venn diagrams to reason about sets and logic.
…well, at least until our intuitions are wrong. Then we get surprised. And then, like in learning to catch a ball, we change our anticipations. We update.
Mechanical reasoning seems to already obey Bayes’ Theorem for updating. This seems plausible from my read of Scott’s review of Surfing Uncertainty, and in the TED Talk I mentioned earlier Daniel Wolpert claims this has been measured. And it makes sense: evolution would have put a lot of pressure on our ancestors to get movement right.
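For reference, the update rule in question is Bayes’ Theorem, which says how much a prediction should shift when new evidence arrives:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Here $H$ could be a prediction about a ball’s trajectory and $E$ the incoming sensory evidence; the predictive-processing claim, roughly, is that motor control approximates this update continuously as new data streams in.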
Why, then, is there systematic bias? Why do the Sequences help at all with thinking?
Sometimes, occasionally, it’s because of something structural — like how we systematically feel a blow we receive as harder than the person who struck it felt it to be. It just falls out of how our brains make physical predictions. If we know about this, we can try to correct for it when it matters.
But the rest of the time?
It’s because we predict it’s socially helpful to be biased that way.
When it comes to surviving and finding mates, having a place in the social web matters a lot more than being right, nearly always. If your access to food, sex, and others’ protection depends on your agreeing with others that the sky is green, you either find ways to conclude that the sky is green, or you don’t have many kids. If the social web puts a lot of effort into figuring out what you really think, then you’d better find some way to really think the sky is green, regardless of what your eyes tell you.
Is it any wonder that so many deviations from clear thinking are about social signaling?
The thing is, “clear thinking” here mostly points at mechanical reasoning. If we were to create a mechanical model of social dynamics… well, it might start looking like a recursively generated social web, and then mechanical reasoning would mostly derive the same thing the social mind already does.
…because that’s how the social mind evolved.
And once it evolved, it became overwhelmingly more important than everything else. Because a strong, healthy, physically coordinated, skilled warrior has almost no hope of defeating a weakling who can inspire many, many others to fight for them.
Thus whenever people’s social and mechanical minds disagree, the social mind almost always wins, even if it kills them.
You might hope that that “almost” includes things like engineering and hard science. But really, for the most part, we just figured out how to align social incentives with truth-seeking. And that’s important! We figured out that if we tie social standing to whether your rocket actually works, then being right socially matters, and now culture can care about truth.
But once there’s the slightest gap between cultural incentives and making physical things work, social forces take over.
This means that in any human interaction, if you don’t see how the social web causes each person’s actions, then you’re probably missing most of what’s going on — at least consciously.
And there’s probably a reason you’re missing it.
Related: Robin Hanson’s A Tale of Two Tradeoffs.
(And obviously, “The Elephant in the Brain” is basically an extended survey of the empirical evidence for these kinds of theses.)
Curated.
I’ve been interested for a while in a more explicit “social reality” theory and sequence. The original sequences certainly explore this, as does a lot of Robin Hanson’s work, but I feel like I had to learn a lot about how to meaningfully apply it to myself via in-person conversations.
I think this is one of the more helpful posts I’ve read for orienting around why social reasoning might be different from analytic reasoning – it feels like it takes a good stab at figuring out where to carve reality at the joints.
I do think that in the long term, as curated posts get cached into something more like Site Canon, I’d want to see some follow-up posts (whether by Val or others) that take the “mechanical vs social thinking” frame from the hypothesis stage to the “formulate it into something that makes concrete predictions, and do some literature reviews that check how those predictions have borne out so far.”
I agree that lots of biases have their roots in social benefits, but I’m unsure whether they’re really here now “because we predict it’s socially helpful to be biased that way” or whether they’re here because it was socially helpful to be biased that way. Humans are adaptation executers, not fitness maximizers, so the question is whether we adapted to the ancestral environment by producing a mind that could predict what biases were useful, or by producing a mind with hardcoded biases. The answer is probably some combination of the two.
Yep, that seems like a correct nuance to add. I meant “predict” in a functional sense, rather than in a thought-based one, but that wasn’t at all clear. I appreciate you adding this correction.
You might have gone too far with speculation: your theory can be tested. If your model were true, I would expect a correlation between, say, the ability to learn ball sports and the ability to solve mathematical problems. It is not immediately obvious how to run such an experiment, though.
Sports/math is an obvious thing to check, but I’m not sure whether it quite gets at the thing Val is pointing at.
I’d guess that there are a few clusters of behaviors and adaptations for different types of movement. I think predicting where a ball will end up doesn’t require… I’m not sure I have a better word than “reasoning”.
In the Distinctions in Types of Thought sense, my guess is that for babies first learning how to move, their brain is doing something Effortful, which hasn’t been cached down to the level of S1 intuition. But they’re probably not doing something sequential. You can get better at it just by throwing more data at the learning algorithm. Things like math have more to do with the skill of carving up surprising data into new chunks, and the ability to make new predictions with sequential reasoning.
My understanding is that “everything good-associated tends to be correlated with everything else good”, à la wealth/height/g-factor, so I think I expect sports/math to be at least somewhat correlated. But I think especially good ball players are probably maxed out on a different adaptation-to-execute than especially good math-problem-solvers.
I do agree that it’d be really good to formulate the movement/social distinction hypothesis into something that made some concrete predictions, and/or delve into some of the surrounding literature a bit more. (I’d be interested in a review of Where Mathematics Comes From)
I think that’s good, isn’t it? :-D
Maybe…? I think it’s more complicated than I read this as implying. But yes, I expect the abilities to learn to be somewhat correlated, even if the actualized skills aren’t.
Part of the challenge is that math reasoning seems to coopt parts of the mind that normally get used for other things. So instead of mentally rehearsing a physical movement in a way that’s connected to how your body can actually move and feel, the mind mentally rehearses the behavior (!) of some abstract mathematical object in ways that don’t necessarily map onto anything your physical body can do.
I suspect that closeness to physical doability is one of the main differences between “pure” mathematical thinking and engineering-style thinking, especially engineering that’s involved with physical materials (e.g., mechanical, electrical — as opposed to software). And yes, this is testable, because it suggests that engineers will tend to have developed more physical coordination than mathematicians relative to their starting points. (This is still tricky to test, because people aren’t randomly sorted into mathematicians vs. engineers, so their starting abilities with learning physical coordination might be different. But if we can figure out a way to test this claim, I’d be delighted to look at what the truth has to say about this!)
I’m surprised by your post; I would have expected a different one based on how it started.
One ability our social mind evolved is the ability to care. Caring produces a certain kind of pattern matching that’s often superior to straight mechanical reasoning. David Chapman wrote the great post Going down on the phenomenon, which dives deeper into that dynamic as it applies to science.
Curiosity is also a very important mental move that doesn’t come out of the need to coordinate physical movement, but is more social in nature.
I mostly agree. I had, like, four major topics like this that I was tempted to cram into this essay. I decided to keep it to one message and leave things like this for later.
But yes, totally, nearly everything we actually care about comes from the social mind doing its thing.
I disagree about curiosity though. I think that cuts across the two minds. “Oh, huh, I wonder what would happen if I connected this wire to that glowing thing….”
I don’t think there’s any rat out there that thinks “huh, I wonder what would happen if I connected this wire to that glowing thing…”, and I don’t think the basic principles of movement coordination changed that much on that evolutionary time-scale.
I could imagine a chimpanzee wondering what will happen, but then chimpanzees also have a strong social mind.
There may be more than one form of curiosity; this discussion suggests that humans, monkeys and rats differ in the kinds of curiosity that they exhibit (emphasis added):
Later in the paper, when trying to establish a more up-to-date framework for thinking about curiosity, they suggest that its evolutionary pathway can be traced to behaviors which are already present in roundworms:
“Why, then, is there systematic bias?… But the rest of the time? It’s because we predict it’s socially helpful to be biased that way.”
Similar thoughts led me to write De-Centering Bias. We have a bias towards biases (yes, I know the irony). True rationality isn’t just eliminating biases, but also realising that they are often functional.