Malice, Stupidity, or Égalité Irréfléchie?
Anyone who has decided to strike off the mainstream path has experienced this: strong admonitions and warnings against what they were doing, and pressure not to do it.
It doesn’t really matter what it is you’re trying to change. If you’re trying to become a nondrinker in a drinking culture, if you’re trying to quit eating junk food, if you’re trying to become a vegetarian or otherwise have a different diet, this will have happened to you.
If you decide to pursue a nontraditional career path (artist, entrepreneur, etc.), you will have experienced this.
If you try to live a different lifestyle than the people around you – for instance, rising each day at 4:30 AM and sleeping early instead of partying – you will have experienced this.
People will pressure and cajole you in many different ways to keep doing things the old way. Almost always, it will be phrased as though they’re looking out for your best interests.
The specifics will vary. It could be phrased as cautious prudence – “What if your business doesn’t succeed and you don’t have a college degree? That could be really bad for you.”
It could be phrased as wanting the best for you – “Go on, live a little, a beer won’t kill you.”
Or it could be encouragement to keep doing whatever you’ve set out to change, with no specific reasoning at all.
I used to wonder why this is so common. Are people stupid? Or malicious? They must be one of those two.
If someone has a preference whose expected value is a better life for them, and they really want to live that preference, then why would someone in their peer group or family want to discourage them? Is it because they have different calculations of what’s valuable, even for obvious no-brainer decisions like quitting the lowest-quality junk food? Is it because they’re malicious and want to hold you back and tear you down?
I think now – neither. Rather, I think it’s an uncritical, unexamined form of desire for equality.
The egalitarian instinct is strong in humans. Most people want others to act broadly like them. It’s almost an affront if you don’t live the normal way – do you think you’re better than them? No matter how subtle, gracious, or modest you try to be about it, it makes people feel bad if you’re breaking from the egalitarian way.
There’s plenty of research on this – high performers getting punished or shunned.
The French in the title is probably wrong (you’re welcome to correct it if you’re fluent), but I think it has a nice ring to it – Égalité Irréfléchie. Unthinking egalitarianism.
There’s a place for some egalitarianism in the world. A desire to bring others up, to open opportunities for others, to decentralize knowledge, to make resources available for people who want to use them.
But that’s a thoughtful egalitarianism, one that examines how we can help everyone do better. An unthinking egalitarianism generally promotes the status quo and punishes people who strive to do better.
But it’s not stupid or malicious. Stupidity implies poor judgment and malice implies poor motivations. Rather, the egalitarian instinct appears to be natural to most people. Pressure on you to conform (even when conforming is bad for your health and life) comes not from stupidity or malicious intent, but from a natural, instinctual drive towards equality that was never carefully examined.
To me, the simpler explanation is loyalty signaling. A change in behavior represents a change in alliances, so your friends/family are trying to make sure you’re not unintentionally signaling your desire to break alliance with them, to join another group. They don’t want to lose an ally, or for your nonconforming behavior to negatively affect their status with others within your common social circle.
IOW, if you = Person A and your friend/family member is B, then B fears that some set of others C will identify you as an out-group member, and question B’s status due to their association with you.
However, since humans are adaptation-executors rather than utility maximizers (and are thus inherently self-deceiving), only the most self-aware B will realize that this is what they fear. Instead, they will simply feel a sense that your behavior is somehow not-right, dangerous, or even offensive to some degree… and a resulting desire to save you from yourself, so as to reduce the agitation and discomfort they feel in the face of your behavior.
Btw, in at least some self-help and entrepreneurial circles this phenomenon is well-known, and persons involved in efforts to change or improve themselves are urged to seek out peer groups in which their desired/target behaviors are normal, desirable, and praiseworthy… as well as to expect/be prepared for negative reactions from current peer groups.
Actually, come to think of it, the advice to seek a peer group is also common to PUAs and 12-step recovery groups alike. Humans just seem to function better when they can realistically believe they are behaving in ways that are admired by others in their social circle.
Anyone have ideas about how to easily and convincingly falsify pjeby’s and/or lionhearted’s hypotheses about psychology?
(Tangential:) I’m not sure, but I think a big perk of the existence of the LW community is that to some extent it is such a social circle.
Good analysis.
Changing your peer group/reference group is generally helpful for making changes. Also, most people report having a difficult time changing others’ minds around them in their current peer group.
I do wonder if there are other solutions besides changing your peer/reference group. I guess keeping your ambitious changes quiet/low-key for a while when starting… other thoughts?
Isn’t this simply another hypothesis? It sounds nice, sure, but I think it would take more evidence than you’ve given to promote this from plausible to probable.
This certainly differs from my mom’s advice, which is essentially to be a member of at least one “average” peer group, in order to remain grounded in reality. Her idea is that if you abandon your current groups and join some other one, you’ll begin to think like them whether their ideas are right or not. Thus, it would be a bad idea to, say, flee Grand Rapids and run off to Santa Clara, where everyone wants to live forever in robot bodies. I might become one of those crazy people.
Your mom’s advice only makes sense if your goal is to be average, because being a member of said group will make it difficult to do any better than average.
It’s also an example of status-quo bias, because she’s defining “reality” as whatever “average” people believe… but the type of people she considers “average” is itself determined by her pre-existing beliefs.
In other words, if you taboo “average”, you find that the advice is really saying, “don’t change, because you won’t fit in with my group any more”!
(That is, it’s exactly what we’d expect someone faced with a changing ally to say.)
I said “seek out peer groups in which their desired/target behaviors are normal”.
In other words, the presupposition is that you’ve already come to the conclusion that you want to have those beliefs or behaviors, because you evaluated them before choosing to participate in the group in question.
But it nonetheless beats the crap out of the article’s hypothesis, which posits an entirely new piece of machinery, rather than falling naturally out of existing theory (i.e., humans are motivated by status and alliances, behavior signals likely alliances or changes of alliance, etc.).
AFAICT, nothing I’ve said in the explanation proposes any new instincts, machinery, or inclinations that aren’t already textbook ev. psych. IOW, based on what we know so far, my explanation should be what we should predict even if we didn’t already know people did this sort of thing.
Ah, there’s the kicker. I thought her advice was good, but I had never realized that I could check out groups’ behaviors in order to see if they’re good or bad, rather than just blindly joining a group and hoping it’s a good one. This should change my behavior in the future.
So, fearing that A is signalling a desire to leave the group, B discourages A’s new behaviour; to counteract this, A seeks out a new peer group, increasing the odds that A does end up leaving the group. So B is engaging in classic self-defeating behaviour … unless, of course, the peer pressure succeeds.
Unfortunately, B’s response to A may well be rational, if B expects other Bs to react the same way, leading A to leave the group unless B can make the peer pressure on A to conform strong enough. The various Bs are in something like the prisoner’s dilemma with each other (if I knew my catalogue of game theory better, I’d be able to say just what they’re in).
Which it usually does. In the ancestral environment, opportunities for seeking out a new peer group were quite limited, so our brains don’t quite realize they can do it; they’re still quite biased towards keeping the existing group happy.
If this weren’t the case, it wouldn’t be so necessary for wealth, self-help, PUA, and other gurus to harp on the importance of doing it, and of being prepared for a negative response from your existing peer group.
Well, their problem is not opposing interests. In your model, they seem to have the same interests—they’re just at the wrong Nash equilibrium.
Right, it’s definitely not PD. And it’s not Chicken. As you say, it’s one with two Nash equilibria, a good one at both-cooperate and a worse one at both-defect. I just don’t remember what it’s called and don’t know where to find out online.
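For concreteness, here’s a minimal sketch (Python; the payoff numbers are illustrative assumptions, not from this thread) of the structure being described: two pure-strategy Nash equilibria, with both-cooperate Pareto-dominating both-defect. In the game-theory literature this payoff pattern is usually called a stag hunt, i.e. a coordination game.

```python
# Illustrative 2x2 coordination game: enumerate its pure-strategy Nash
# equilibria. Payoff numbers are made up to produce the structure the
# commenters describe (two equilibria, one better than the other).
from itertools import product

ACTIONS = ("cooperate", "defect")

# payoff[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoff = {
    ("cooperate", "cooperate"): (3, 3),  # the good equilibrium
    ("cooperate", "defect"):    (0, 2),
    ("defect",    "cooperate"): (2, 0),
    ("defect",    "defect"):    (2, 2),  # the worse equilibrium
}

def is_nash(row, col):
    # A profile is a Nash equilibrium if neither player can gain by
    # unilaterally switching to their other action.
    row_ok = all(payoff[(row, col)][0] >= payoff[(alt, col)][0] for alt in ACTIONS)
    col_ok = all(payoff[(row, col)][1] >= payoff[(row, alt)][1] for alt in ACTIONS)
    return row_ok and col_ok

for profile in product(ACTIONS, repeat=2):
    if is_nash(*profile):
        print(profile, "is a Nash equilibrium, payoffs", payoff[profile])

# Prints both (cooperate, cooperate) and (defect, defect). Contrast with
# the PD, where defect strictly dominates and only (defect, defect) is
# an equilibrium; here the good outcome is also stable, the players just
# have to coordinate on it.
```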
(Parts of this comment were misinterpreted. I have slightly edited this comment to make it clearer; this editing was done after lionhearted replied to it.)
I downvoted this post. The OP describes a phenomenon that everyone knows about, then suggests that “stupidity” (a word mostly left undefined) and “malice” aren’t good explanations. (How did “malice” ever seem like a good explanation in the first place?) That’s kind of correct, maybe, depending on what class of things the OP is using “stupidity” to refer to. A single word is never a good explanation for an aspect of human psychology. The OP then suggests that the “egalitarian instinct” is an explanation. The OP gives little explanation for this explanation (ETA: I didn’t mean explaining what the egalitarian instinct is, which is easily researched by those interested, but explaining more persuasively/effectively how it explains the phenomenon mentioned), no mention of other possible explanations (no acknowledgment of the existence of other possible explanations), and no description of what the world would look like if the OP’s explanation were wrong. Thinking up one plausible explanation for an observed phenomenon is fine, I guess, if that’s where you want to start, and you don’t care too much about the phenomenon in question. Writing a Less Wrong post that looks exactly like that, though, is just wrong. This is not how you go about constructing a model of human psychology.
I would like to complain in more detail about the way “stupidity” and “malice” are brought up and almost immediately dismissed. It causes some part of the reader’s brain to read along and think “Yeah, stupidity doesn’t sound like a very good hypothesis, and huh, malice doesn’t either… I wonder what a good explanation would be? Oh, the OP suggests the egalitarian instinct, that’s comparatively a lot more plausible than stupidity or malice, which means it’s probably correct.” If stupidity and malice had never been brought up, the reader would be a lot more likely to treat the proposed explanation of egalitarianism with a healthier amount of skepticism. Bringing up the red herrings thus misleads.
Why not? Are you saying humans are never systematically glad at another’s misfortune or loss under certain conditions? I would say humans often are. LWers probably come from societies and situations where this kind of behaviour is less prevalent than the human norm.
This is a really bad habit. (Specifically the habit of asking or thinking things like “Are you saying completely ridiculous thing #24626772?”.)
The answer is yes fairly often, which gives a lot of info cheaply.
You’re right, I was imprecise; the bad habit is asking it and halfway-assuming the answer will be ‘yes’ instead of asking it without the presumption of nonsense.
Yes, being polite is good, and rhetorical questions can easily go the other way.
Perhaps this is so.
But malice is basically “desiring another’s misfortune or loss”, so malice never being a good explanation is completely ridiculous. I wanted to check if you were really saying what I thought you were saying, so I rephrased it in my own words to match how I understood your statement.
That part is of course a good habit. I think the confusion happened because I was asking “how did malice ever seem like a good explanation [for this phenomenon] in the first place?”, not “how did malice ever seem like a good explanation [for anything ever in the history of the universe] in the first place?”.
In fairness, you did raise some good points as well, and I’ll address those –
Indeed. And yet, one that many people can’t explain. Which is why it’s worth thinking about.
I defined stupidity in the post as “impl[ying] poor judgment,” meaning going through a conscious but faulty thought process. I could have been more explicit about this definition, but only at the expense of pace and brevity – it would have made the post heavier and harder for casual readers to digest without adding significantly more clarity. I suppose it could have been defined explicitly, but I don’t think the piece becomes stronger if I do. Rather, I think it becomes weaker for the vast majority of potential readers.
Some of this behavior certainly seems mean-spirited and malicious to people. Many examples available if you honestly can’t think of any.
True, yes, but you must consider the audience. There’s a reason, unfortunately, why popular magazines are more popular than science journals. Style does matter, and it must always be a consideration if you’re tackling a complex theme and want your piece to be accessible to a wide variety of people.
It has been written about extensively. Again, this wasn’t a PhD thesis. In fact, the topic has come up here on LessWrong before, notably in “Tsuyoku vs. the Egalitarian Instinct” by Eliezer. I suppose I did assume some familiarity with the material that other readers might not have, and could have cited that as relevant prerequisite reading.
Again, because I was formulating a hypothesis, not writing a thesis.
I appreciate you taking the time to reply and elaborate on your thoughts, but there might be a difference in goals and expectations here. I’ve attempted to write a series of observations, reason through them, and come up with one explanation for a not-fully-understood phenomenon.
It’s already stimulated some good discussion. I’m happy with that result and it has, thus far, done what I intended. I think a longer, weightier, more formal post would have been less effective at the intended goal of putting out observations, a hypothesis, and stimulating some discussion.
I didn’t pick up that the article was “formulating a hypothesis”. Did the article indicate that this is what it was doing? Perhaps I missed it.
Now that I do know, from your comment, that the article was doing that, I have to say I’m a bit surprised; I didn’t expect to see that sort of article in the main section. Then again, I’m no expert on Less Wrong so maybe that sort of thing is not so uncommon.
Read and understood, we probably agree about most everything here and discussing it further is probably suboptimal.
I’ll make a few clarifications that I don’t think you’d argue with too much:
I had read the post and recognized the concept, but a link to it would have primed me more for looking at similarities/differences between the phenomena you and he discuss. Consider adding one?
I can think of many examples, but I can also think of many examples that don’t seem malicious, in fact most don’t, and since you’re proposing an explanation of the class of behaviors, it seemed absurd to think that anyone would think that malice was an explanation. But upon reflection this was severe typical mind fallacy on my part.
Mostly fair points, but you missed the scope of this post. It’s a 550-word post that makes a series of observations and offers a hypothesis.
It has stimulated some good discussion about alternative reasons that this phenomenon exists.
Your idea of the target scope is off. It was not an attempt to construct a model of human psychology, which would be more fit for a PhD thesis than a 550-word post. It is a series of observations, reasoning, and a hypothesis.
I had neglected this important consideration and I will retract my downvote until I have thought about this more. I still think this post shows off some bad cognitive habits, and I’m afraid that it getting many upvotes would both incentivize bad cognitive habits and reflect poorly on Less Wrong. Thus currently my policy is “downvote if it gets above 15, upvote if it gets below 0, else do nothing”.
I didn’t mean to imply that. I was trying to say “this is not how one should generally go about constructing a model of any aspect or set of aspects of human psychology” but thought that sounded too clunky.
I agree that a <1000 word post shouldn’t go into lots of details, but if you’re trying to keep it short then I think it’s a bad idea to put forth a hypothesis unless it’s sufficiently clear that it’s a particularly good or interesting one. I think you could have spent the words you did on your hypothesis much more effectively by proposing some plausible hypotheses and then explicitly asking Less Wrong what they thought. I would consider upvoting the post if you did this, sexy title be damned, but I realize that would be a fair bit of work for you even if you agreed it would be better.
Interesting. Okay, thank you for the feedback. One thing I’m going to think about is signal-to-noise ratio vs. putting ideas out there.
My first inclination is that putting out a larger volume of potentially correct work and letting it go through trial-by-fire and be discussed is superior to waiting until an argument is fully bulletproofed, caveated, and so on. But there are probably some tune-ups I could have made to write it more strongly – I’ll sleep on it tonight, re-read your comments tomorrow, and give it more thought. Thanks for your replies.
A large volume of potentially correct work that you seek discussion on might do better in the discussion area. I found this one valuable for the term Égalité Irréfléchie, which does have a nice ring to it, but otherwise a bit tedious: everything between the second paragraph and “I used to wonder...” sounded completely superfluous, and the last half painfully belabored its own points.
The redundancy of the text was balanced by the lightness of the support. Where lukeprog would’ve added 60 citations to peer-reviewed papers, you merely said “there’s plenty of research on this.”
I’ve enjoyed your writing before, but I think this one wasn’t quite ready for the main LW posting area.
Definitely depends on what your goals are. If you’re interested in getting feedback on your ideas while stimulating discussion, then doing what you did, except with more of a “but that’s my take on it and I’m not completely satisfied with it yet, what does Less Wrong think?” approach, will get more useful feedback. Posting things somewhat haphazardly will get you more feedback on background/meta stuff, like the things I’ve focused on in my comment replies, which can be useful but can also backfire in non-obvious ways. Your reputation might get hurt to some extent, which will cause you to get fewer upvotes and less attention in the future when you want people to really pay attention to your well-thought-out ideas. I think this downside is very easy to underestimate. I tentatively think (though I haven’t done a quantitative analysis and there are many other possible explanations) that I used to get a lot more karma per comment before I started posting about things that had way too much inferential distance, or that pattern-matched to things people believe for silly reasons. That was about 4 months ago.
I think knb’s “weirdness” and pjeby’s “loyalty” seem like good reasons.
Another reason could be that they feel uncomfortable about you making a different decision because it could imply that they are making the wrong life decisions. So persuading you to make the same choice also reassures them that they have made the right choice for themselves, thus removing a source of stress.
Also, a very simple explanation (you kinda mentioned this with the different values): they might just be trying to make the “correct” decision for you. People differ in knowledge. They might simply think they make better decisions than you do and are trying to make your life better.
Another possibility, though I am not so sure how aware people would be of this: if you differ in lifestyle from other people, you might grow apart (less time spent together etc.). Keeping the same lifestyle could ensure that you remain friends.
I really doubt this explanation. It seems like people want to discourage their friends from acting weird because there are serious social costs for people who act weird, and for people closely associated with weird people.
Maybe, but the same kinds of reactions happen even when trying to, say, quit eating donuts.
Heck, there are stories of people from particularly poor backgrounds getting insulted for trying to get a college education. “Don’t act weird” might explain radical behavioral shifts, but it doesn’t explain everything.
I think this is still “explained” by “don’t act weird” to the extent that any of these theories explain anything.
knb’s explanation also covers this one. Also, the only time I recall something like this happening is when my high school friends disapproved of vegetarianism among males because it is unmanly. “The egalitarian instinct” is not a good explanation for that behavior.
It doesn’t seem to me that these are really exclusive hypotheses. If standing out incurs serious social costs, it would imply a general opposition to deviating from the norm.
I have to doubt any explanation that hinges upon preserving the status of members of one’s social group though; I occasionally find myself pressured to conform by people who aren’t friends or even associates, who I am likely never to meet again.
Why the “rather”? How ‘natural’ an instinct is implies nothing about its moral quality.
On the pure grammar:
Egalité Irréfléchie.
(Also the first E is really É, but one is allowed to omit diacritics on uppercase letters.)
I’ll defer to the native speakers on whether the phrase is a good translation. (Those native speakers may also want to have a look here, hint hint.)
I’m a native speaker. “Égalité Irréfléchie” sounds awkward. I prefer “Égalitarisme irréfléchi” (back to the masculine gender, by the way). The first would be “thoughtless equality”, while my version would be “thoughtless egalitarianism”. But frankly, neither sounds very good.
We have a close idiomatic term, however: “Nivellement par le bas”, which quite literally means achieving equality by lowering the upper bounds instead of raising the lower ones. It calls to mind the image of cutting the mountains down instead of filling in the valleys. This expression is widely used as an applause light when talking about schools. (Generally, it goes like: “Let’s help failing children in such and such a way”, then “But that method will slow everyone else down! That’s a nivellement par le bas!”)
(Oh, and please don’t forget the diacritic in “Égalitarisme” or “Égalité”. French typography requires it.)
English also has a similar term, “tall poppy syndrome”. To make all the poppies the same height, you behead the tallest ones.
While it’s true that diacritics on capitals are now recommended, it should be acknowledged that there was a tradition of leaving them out.
Thanks for the info on the expression nivellement par le bas.
I acknowledge the tradition, but have rejected it ever since I started using a decent French layout. Now, you can easily have them with a US layout by using dead keys (they can be a mild chore if you write mostly in English, though).
Are you in or near Paris by any chance? We have an upcoming meetup on June 25.
I’m not a native speaker of French. I’m not any speaker of French. I gave up on trying to learn it (and Japanese, except to say wakarimasen). German is tolerable. English is quirky, but has a big enough installed user base.
This (allowance to omit) comes from the days of mechanical typewriters, many of which could put accents on lowercase letters but not on uppercase ones. There’s really no reason for it now (or ever, in handwriting).
Well, there is something of an analogous issue now, namely that accented uppercase letters are infrequent enough that I can never remember the codes, and always have to look them up: while I’ll never forget Alt+0233 for é, I had already forgotten Alt+0201 for É after looking it up yesterday.
That having been said, I certainly agree that “Égalité” looks better than “Egalité”.
No need to remember them; just subtract 32.
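To see why that works – a quick illustrative sketch (Python, added here for concreteness; not from the original thread). In ASCII and Latin-1 alike, a lowercase letter’s code point is its uppercase counterpart’s plus 32, so subtracting 32 from the lowercase Alt code gives the uppercase one:

```python
# Check that the lowercase/uppercase offset of 32 holds for accented
# Latin-1 letters just as it does for plain ASCII.
for lower in "éàèùâçaz":
    upper = lower.upper()
    print(f"{lower} = Alt+{ord(lower):04d}, {upper} = Alt+{ord(upper):04d}, "
          f"difference = {ord(lower) - ord(upper)}")

# Every line prints a difference of 32, e.g. é = Alt+0233, É = Alt+0201.
# (Caveat: ÿ breaks the rule. 255 - 32 = 223, which is ß, because the
# uppercase Ÿ sits at U+0178, outside Latin-1.)
```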
Thanks, amended title accordingly.
Please amend the phrase in the fourth paragraph counted from the bottom, too.
Or perhaps these unwanted counselors are suggesting to the non-conformist that he can learn something from the people who did go through the decision process and made a decision. If someone is taking their first exploratory steps into a life of heroin addiction, abusive relationships, and petty crime (which in my social sphere, at least, is non-conformist), and I tell them why, in my humble opinion, they might want to reconsider, is that really “stupidity” or “malice” on my part?
I posit that this would be a classic case of reasoned egalitarianism, and therefore it does not fit the profile outlined for “unthinking egalitarianism”.
There’s also the possibility that you’re being inconvenient to them. Say, vegetarians can’t go to a true meat-lover’s party, people who get up early might need ME to get up early for whatever reason, and if your business fails and I live with you, that’s obviously my problem.
More generally, it may be that your unusual choices benefit you, but impose costs on your friends and family. Unusual choices are less “safe”—they can move you farther from ordinary outcomes, and the results are harder to predict. Compare the stereotyped conflicts between parents and their teenaged kids:
Teenager (as seen by parents): “Later, olds! I’m going out with my poorly socialised friends to get wasted and hook up (maybe someone will get pregnant). Woo!”
Parents (as seen by teenager): “Stop there! Ve have ways of preventing your fun! You are never allowed to do anything that you enjoy, ever!”
You do make a good point—especially re: failed businesses.
Though I have also observed the opposite far more frequently. A good example would be when I want to go to bed at a sensible hour and I have friends telling me “oh, you can stay up just this once”. The friends gain by my staying up (more people to party with), but don’t have to suffer the consequences (e.g., my inability to work effectively the next morning). I think this disparity in expected outcomes means they are free to try and “tempt” me to break my new habit, which is mildly inconvenient for them… because they don’t have to pay the heavy costs.
At a minimum, if one person in a household is on a significantly different sleeping schedule, it’s going to be logistically more difficult—everyone else is going to need to be careful about noise for extra hours, and people will have less time with each other.
This hypothesis is at least falsifiable – one can test whether the degree of a peer group’s opposition to one of their members changing depends substantially on these sorts of inconveniences.
I like the concept of “unthinking egalitarianism”. I down-voted the article because it dismisses other explanations (including being intensely dismissive of the possibility that someone else really could know better o.o) and provides very little evidence to support its own.
No discussion of unthinking egalitarianism is complete without a nod to Vonnegut’s Harrison Bergeron.
Can you distinguish thoughtless egalitarianism from stupidity a little more? Stupidity seems to me to mean just that sort of thoughtlessness.
To me, stupidity can apply to both types of thoughtlessness as well, but I feel there is a moral difference between these two types of stupidity (“stupid” vs. “unreflective”):
If people were being stupid, but knew that their stupidity was causing problems you could condemn them for not being willing to learn more. When people accuse others of being racist, they frequently say “that’s ignorant”. In this case, ignorance is considered inexcusable. The offender hasn’t done their due diligence.
On the other hand, if people aren’t even aware that they have some shortcoming that they should be alleviating, i.e. by being unreflective, it’s harder to hold them morally accountable. It seems to be normal and perfectly acceptable in modern society not to have ever considered the reasons for your values in the cases that lionhearted brings up (such as junk food, drinking and sleeping).
Voted up.
A good policy for Less Wrong folk (and ‘nerds’ generally) might be to think of all stupidity as the latter, i.e. something like lack of reflectivity caused by a combination of boundedness and near-total absence of affordances. In fact, I would argue that allowing oneself to condemn others for ignorance (as you put it) is mostly a harmful policy also caused by a combination of boundedness and near-total absence of affordances, in roughly the same manner as the ignorance one condemns. Considering the fact that I just went meta everyone should now upvote me. ;P
It might be useful to spend some time thinking about how we and others use the word “stupid”.
Creationism is stupid. Believing creationism is stupid. Teaching it, or advocating for teaching it, is stupid. Why would anyone ever be so stupid? Well … I’m not sure, but I think it has a lot to do with: ① inferential distance: evolution is a bit of a counterintuitive idea, like recursion or quantum theory; and ② loyalty: many people are systematically taught that evolution is an idea of the Enemy, so they have a presumption against it.
But here’s the thing. “Inferential distance” is an exterior view on a cognitive process. From the inside, an idea that’s too far away from your knowledge looks a lot like a nonsense idea, an unproven idea, a wild conjecture, a “how could you ever know that‽” idea. Evolution must look to creationists the way claims of paranormal abilities look to me: “How could that ever work? You don’t have any good evidence of that. That’s not the way I was taught the world works! Your experiments can’t be trusted; it’s more likely you’re playing some sort of tricks with the data. Besides, my fellow skeptics have thought of a bunch of challenges you need to meet to prove you’re not just making it all up.”
Creationists think I am stupid because I believe in evolution. They think I have fallen for a hoax put together by atheistic scientists under the influence of the Devil. I think that they are stupid. I think they have fallen for a hoax put together by preachers under the influence of the memetic evolution of religious beliefs.
It seems likely that someone has been fooled. If human intelligence arose from protohumans’ differences in ability to fool one another about life-critical subjects … or if human doubt arose from protohumans’ inability to distinguish between the promises of God and those of the Devil … then whether you are being fooled is pretty much the most important thing to know.
And this sums up why I feel that respect for the silly beliefs of others is important: it sets the stage for the acceptable treatment of things that are confusing or silly.
It’s not that you take the belief seriously, but rather that you take seriously the epistemic position that makes that belief seem sensible.
Beautifully put.
...I think you just successfully reduced stupidity into non-stupid parts. You should do a top-level post.
Good question. “Stupidity” can mean a variety of subtly different things in English. I was drawing a distinction between “poor judgment” and “never having consciously thought about it at all.”