I enjoyed reading a review of Sick Societies. Seems like it’s difficult to find the right balance between “primitive cultures are stupid” and “everything in primitive cultures is full of deep wisdom that we modern people are unable to understand”.
As usual, public opinion swings like a pendulum; on the social level it goes from “we are 100% correct about everything, there is nothing to learn from others” to “everything would be better if we replaced our ways with the wisdom of others”.
In the rationalist community, I think we started at the position of thinking about everything explicitly, and we keep getting “post-rationalist” reminders of Chesterton’s fences, illegible wisdom hidden in traditions (especially Buddhism), et cetera. Which is good, in moderate doses. But it is also good to admit that sometimes things that seem stupid… are actually stupid. Not every seemingly stupid behavior contains a hidden wisdom; sometimes people are stupid and/or stuck in horrible Nash equilibria.
As usual, the frustrating answer is “it depends”. If we see something that doesn’t make sense to us, it is good to try figuring out whether there is a good reason we missed. But this doesn’t mean there always is a good reason. It doesn’t even mean (as Chesterton would implore us) that we can find out why exactly some tribe started doing this many years ago. Maybe they simply made a mistake! Or they had a mad leader who was good at killing those who opposed him, but his policy proposals were disastrous. They were just as fallible as we are; possibly much more.
Asking whether “something” “is” “stupid” is sort of confused. If I run algorithm X which produces concrete observable Y, and X is good and Y is bad, is Y stupid? When you say that Y is stupid, what are you referring to? Usually we don’t even want to refer to [Y, and Y alone, to the exclusion of anything Y is entangled with / dependent on / productive of / etc.].
I don’t have an exact definition, but approximately it is a behavior that is justified by false beliefs, and if only one person did it, we would laugh at them, and the person would only be hurting themselves… but if many people start doing it, and they add an extra rule that those who don’t do the thing, or even argue against doing the thing, must be punished, they can create a Nash equilibrium where people doing the thing hurt themselves, but people who refuse to do the thing get hurt even more by their neighbors. And where people, if they were allowed to think about it freely, would reflectively not endorse being stuck in such an equilibrium. (The article mentions that often, when people learn that others do not live in the same equilibrium, they become deeply ashamed of their previous behavior. Which suggests that an important part of why they were doing it was that they did not realize an alternative was possible: either it didn’t occur to them at all, or they believed incorrectly that for some reason it wouldn’t work.)
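To make the equilibrium structure concrete, here is a minimal toy model; the payoff numbers and the `payoff` function are made up for illustration, not taken from the book or the review:

```python
# Toy model of the "costly custom" equilibrium described above.
# Assumed, purely illustrative costs:
#   performing the harmful custom costs you HARM_TO_SELF;
#   refusing while the rest of the community complies costs you PUNISHMENT.
HARM_TO_SELF = 2   # cost of doing the thing to yourself
PUNISHMENT   = 5   # cost inflicted by neighbors on a lone refuser

def payoff(i_comply: bool, others_comply: bool) -> int:
    """Payoff to one individual, given their choice and the community norm."""
    if i_comply:
        cost = HARM_TO_SELF
    else:
        cost = PUNISHMENT if others_comply else 0
    return -cost

# If everyone else complies, complying is your best response (-2 > -5)...
assert payoff(True, True) > payoff(False, True)
# ...so "everyone complies" is a Nash equilibrium, even though
# "nobody complies" would make every single person better off (0 > -2):
assert payoff(False, False) > payoff(True, True)
```

As long as the punishment for refusing exceeds the harm of complying, no individual can improve their lot by unilaterally stopping, which is exactly why the practice can persist even if nobody would endorse it on reflection.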
Do you have a reason to dismiss them being “possibly much less” fallible?
I cannot dismiss that possibility completely, but I assume that cultural inventions like the scientific method and free speech are helpful, at least compared to living in a society that believes in horrible monsters and spirits everywhere, where interpersonal violence is the norm, and a two-digit percentage of males die by murder. In such a society, if someone tells you “believe X, or else”, then it doesn’t matter how absurd X is, you will at least pretend to take it seriously. (Or you die.) Even if it’s something obviously self-serving, like the leader of the tribe telling little boys that they need to suck his dick, otherwise they will not have enough male energy to grow up healthy.
These days, if you express doubts about the Emperor’s new clothes… you will likely survive. So the stupid ideas get some opposition. And I don’t know how well the Asch conformity experiment replicates, but it suggests that even a single dissenting voice can do wonders.