How to teach magical thinkers?
I’m afraid I haven’t properly designed the Muggles Studies course I introduced at my local Harry Potter fan club. Last Sunday we finally had our second class (after wasted months of insistence and delays), and I introduced some very basic descriptions of common biases, while of course emphasizing the need to detect them in ourselves before trying to detect them in other people. At some point, which I didn’t quite notice, the discussion shifted from an explanation of attribution bias into a series of multicultural examples in favor of moral relativism. I honestly don’t know how that happened, but as more and more attendees voiced their comments, I started to fear someone would irreversibly damage the lessons I was trying to teach. They stopped just short of calling the scientific method a cultural construct, at which point I’m sure I would have snapped. I don’t know what to make of this. Part of me tries to encourage me to put more effort into showing these people the need for more reductionism in their worldview, but another part of me just wants to give them up as hopeless postmodernists. What should I do?
I can’t resist...
Did the scientific method grow on a tree, or did people invent it?
Did people invent the scientific method simultaneously everywhere, or was it invented and practiced at specific places?
:D
The real fallacy in my opinion is having a connotation that if something is constructed and promoted within a culture, that makes it wrong. For example, consider the Pythagorean theorem… knowing that Pythagoras was a rich white cis male, shouldn’t we remove it from the curriculum? And perhaps replace it with something more enlightened, such as: “all sides of a triangle are equal, even if their lengths may be different”.
In the same sense, science, even rationality itself, are cultural constructs. Maybe even human speech is a cultural construct, but luckily that happened sufficiently long ago so now all cultures have it. Okay, I am not sure about the last example. But I am sure that calling things “cultural constructs” is a cultural construct itself.
That’s already been done
“A topologist is someone who can’t tell the difference between his ass and a hole in the ground.”
Are those two things really homeomorphic? A topologist’s arse has a hole running all the way through it, but a pit in the ground’s only open at one end. You might say: go far enough into a bottom and eventually you reach a hole; go far enough into a hole and eventually you reach a bottom.
(Sorry. I’ll go to bed now.)
The scientific method is a cultural construct, but one that yields nice things such as iPhones and reasonably accurate theories of physics. Of course, it also helps produce nasty things like atomic bombs.
I think the real fallacy is saying that the scientific method is just as good as any other method at finding truth.
“The scientific method is a cultural construct” and “the scientific method is just as good as any other method at finding truth”
Are these statements as independent as they seem? It is my impression that ”… and all cultural constructs are equally valid” is at least connotatively associated with the notion of a “cultural construct”.
Good point.
Can you say more about where this impression comes from?
I would agree with ”...and cultural constructs do not represent a uniquely valid objective truth,” and various things along those lines. But “all cultural constructs are equally valid” seems significantly overstating the case.
For example… I expect that most people who talk about cultural constructs at all would agree that chattel slavery and abolitionism are both cultural constructs. I doubt they would agree that they are equally valid for any understanding of “valid” that is at all relevant to this discussion.
Do you expect something different?
I think that the expression “cultural construct” implies that the construct in question is a representation not of physical reality, but of something inside people’s heads.
Usually this is held to mean that cultural constructs are somewhat arbitrary, highly malleable, and do not involve laws of nature.
I think the scientific method is something that scientists do. It’s not an object in physical reality the way a chair happens to be.
Do you think that the scientific method-1800, the scientific method-1900, the scientific method-1950 and scientific method-2014 happen to be exactly the same thing?
Yes, of course.
I don’t understand the question.
Well, I certainly agree that “cultural construct” implies (indeed, I would say it denotes) something inside people’s heads. And I agree that many people believe that, or at least are in the habit of thinking as if, the contents of people’s heads are somewhat arbitrary, highly malleable, and do not involve laws of nature.
I’m not sure how that relates to the ”… and all cultural constructs are equally valid” clause I asked about, though.
It relates through the part about not involving laws of nature. In a certain sense cultural constructs are not real. They are imaginary. And you can think of all imaginary things as about equally valid.
I am aware of holes in that argument, but getting back to the original point, when people call something a “cultural construct” there is a pretty heavy implication that whatever replacement they have in mind for it is going to be at least as good and probably better.
Just because something is a cultural construct, and thus “imaginary” or perhaps even subjective to some extent, does not mean it’s not about reality. To think otherwise is simply a mind projection fallacy.
True, but I’d not place much trust in a map whose creators refuse to constantly check it against the territory.
Mm. Yeah, I’ll accept that.
It looks like the problem might be that saying “X is a cultural construct” gets read as “X is just a cultural construct and as such has no value outside of its cultural boundaries”.
There is more to a thing than how it came to be.
If your definition of “truth” is such that any method of finding it is as good as any other, then the scientific method really is no better than anything else at finding it. Of course most of the resulting “truths” won’t bear much resemblance to what you’d get if you only used the scientific method.
Also most of these truths will eventually wind up putting you in a position where you start experiencing pain or even dying despite your “truth” telling you that you aren’t.
Or as Chesterton put it:
Or as Dick put it: “Reality is that which, when you stop believing in it, doesn’t go away.”
The scientific method should not be blamed; it is the people who created those things who should be questioned. We all know that the people who invented iPhones and bombs followed a specific method.
It grew on a tree. Olives grow on trees too, but no-one knew you could eat them until someone discovered that soaking them in brine makes them edible.
Or less metaphorically, science was discovered, not invented. It works for reasons that have nothing to do with us.
The techniques of the scientific method are universally valid; they’re not contingent on a specific culture. If civilization were wiped out today and we had to start from scratch, we would discover the same methods to ascertain natural laws and apply them to our purposes.
When we find an alien culture, I expect they will follow the same rules to find out what works and what is real (if they’re advanced enough to use science).
Different scientific communities have different methods. The scientific method as practiced by physicists isn’t the same method we use in computer systems research and it isn’t the same method they use in medical research. And this isn’t because these different fields have different deviations from the One True Method—it’s because different subjects require different methods to prevent error.
In computer systems and physics, data is typically collected by machines. We therefore aren’t worried about observer bias or placebo effects, and we don’t usually worry about blinding things from experimenters.
With computer systems, everything is reasonably deterministic and so statistical error isn’t a major concern. Also, any effect that’s big enough to be interesting is likely to be far larger than statistical noise—it’s not an interesting paper unless you got a factor of two improvement or something like that.
In physics, it’s routine to circulate preprints before a peer-reviewed paper. This doesn’t happen much in computer science.
In physics and CS, a purely theoretical argument without data can be taken seriously, and often people will trust theory more than experiment. My impression is that people don’t have nearly the same sort of confidence in theory in biological or social science.
I think talking about “the scientific method” is mostly an oversimplification. I don’t hear professional scientists using that category when talking amongst themselves about their work. I hear much more about the particular publication and review norms of individual fields.
By scientific method I would mean something on a far more general level than details about circulation of preprints.
Architecture varies, but the structural mechanics that describes how buildings stay up is the same always and everywhere.
The awkwardness is that once you generalize enough to cover everything we normally refer to as “science”, it’s hard not to include a very wide range of things we don’t normally think of as science.
We don’t think of legal reasoning as science, but it involves using information and experimentation (with a community of experts!) to update our model of the world.
The fashion industry uses experiment and empirical reasoning to figure out what people want to buy. But I don’t think it’s useful to talk about fashion designers as scientists.
I think the term “scientific method” as normally used in English does not pick out any actual cluster of behaviors or practices. It’s a term without a coherent referent.
The term “scientific method” as ordinarily used is associated with the traditional rituals of “Science”, which are themselves unsatisfactory, or at best an improvable-upon approximation to what really works in finding out about the world. The more useful cluster is the one hereabouts called Bayesian epistemology. It can and should be practiced everywhere, and if a fashion designer employs it, it is just as useful to call it that as when a scientist in the laboratory does.
Science is tailored to counteract human cognitive biases. Aliens might or might not have the same biases. AIs wouldn’t need science.
For example, science says you make the hypothesis, then you run the test. You’re supposed to make a prediction, not explain why something happened in retrospect. This is to prevent hindsight bias and rationalization from changing what we think is a consequence of our hypotheses. But the One True Way does not throw out evidence because humans are too weak to use it.
That isn’t really clear to me. Science wasn’t intelligently designed; it evolved. While it has different ideals and functions from other human institutions (such as religions and governments), it has a lot in common with them as a result of being a human institution. It has many features that contribute to the well-being of its participants and the stability of their organizations, but that don’t necessarily contribute much to its ostensible goal of finding truth.
For instance, it has been commonly observed that wrong ideas in science only die when their adherents do. Senior scientists have influence proportional to their past success, not their current accuracy. This serves the interests of individual humans in the system very well, by providing a comfortable old age for successful scientists. But it certainly does not counteract human cognitive biases; it works with them!
Yes, science has the effect of finding quite a lot of truth. And philosophers and historians of science can point to good reasons to expect science to be much better at this than other claimed methods such as mysticism or traditionalism. But science as an institution is tailored at least as much to self-sustenance through human biases, as to counteracting them.
What do you mean by those techniques, if you had to taboo “scientific method”?
Systematized curiosity, carefully doubt-filtered and confirmation-dependent.
Could you explain that in a bit more detail?
I think a Buddhist who seeks enlightenment might practice systematized curiosity. He doubts a lot of things that I take for granted. He only believes things that are in some sense confirmed by his perception.
Wow. That’s an unexpected view into myself. I happen to be a Theravada Buddhist.
Of course, I wouldn’t expect Buddhist meditation techniques to be necessarily useful for alien species.
I’m not familiar with that particular brand of Buddhism, but does it have concepts like karma and reincarnation? If so, how do you deal with them while at the same time wanting to promote reductionism?
Theravada Buddhism is mainly practiced in Sri Lanka and Mainland Southeast Asia.
We do not consider Buddha to be a mystical superbeing come to Earth from celestial realms. We see Buddha as a regular guy who thought very hard about the problem of emotional suffering and came up with an innovative self-hack.
Karma and reincarnation are an inevitable part of Indian culture, and Buddhism was also touched by them. Karma is understood as the effect of your intentions, rippling across causal chains, and through some of those causal chains, influencing your future circumstances. It is not a cosmic system of morality, but a way of reminding you to be mindful of how what you do affects others and, potentially, your future selves.
Reincarnation is something I have more serious problems with. For one, I do not believe it. It is not a mandatory belief, though (nothing is, actually).
I have to preface by saying that I’m not a Buddhist myself; I do my meditation in a non-Buddhist framework. That means I do have my fair share of experiences, but I have to translate between frameworks.
Of course karma is all about causal chains, but a lot of the causal chains that Buddhists see don’t really lend themselves to materialist reductionism.
More importantly, if you say karma is the effect of your intentions rippling through causal chains, that doesn’t answer the question of why the whole “goal” of Buddhism is to move beyond karma and become enlightened. It doesn’t even tell you what that “goal” is supposed to mean.
To me your answer looks like you are just reciting the teacher’s password. If I went to an advanced Buddhist teacher, I doubt that he would tell me that karma is about influencing your future circumstances, because Buddhism is about being in the now, being in the moment.
Of course Buddhism has no mandatory beliefs, but if you drop reincarnation and keep karma, you are left asking where all that karma that determines your life comes from, if not a previous life. In some sense it’s a valid Buddhist position not to seek a source, but if you are a reductionist, part of that means actually breaking things down and not just stopping at saying that the karma comes from somewhere.
From the actions of other people? One part of Buddhism is to de-emphasize the concept of ‘self’, so the difference between “good/bad actions will cause good/bad things to happen to future reincarnations of me” and “good/bad actions will cause good/bad things to happen to other people in the future” might be smaller than it would seem at first sight.
Buddhism does not de-emphasize “self” to focus on other people.
Buddhism de-emphasizes “self” in the sense of the continuity of identity—the classic Buddhist view looks at the mind/soul as beads on a string (of time) -- the beads are similar but they are not just one bead.
Yeah. It reminds me of questions like “what if, 5 seconds from now, I will be Britney Spears?”. I’m a little unclear on exactly what parts of “you” continue into the next incarnation (metaphors like “a lamp lighting another lamp” are not very precise) -- I think you don’t get memories, but you do get mental habits and inclinations?
I could imagine a Less Wronger taking the position that “supposing for the sake of argument that everything in Buddhist metaphysics is correct, the similarities between two reincarnations are not great enough to preserve your personal identity in the philosophical/moral/my-utility-function sense. So you have no reason to care more about your future incarnation than about any other person”.
Furthermore, I could also imagine a Buddhist making that argument. Two recurring themes seem to be that it’s bad to focus on what you want, and that in fact you should abandon the idea that there is a “you” that wants things. If you follow that advice it seems you should not care about what will happen to “your” reincarnation in particular.
A few notes:
Moral relativism, and metaethics in general, are unrelated to the scientific method; I hope you can figure out why, and maybe discuss it next time.
You appear to make a sharp division between you (the enLWightened) and “them” (the unwashed). Given “the need to detect [the biases] in ourselves”, how much effort and time have you put into describing your own experiences?
Given the apparent failure of this last class, can you identify your personal bias or a fallacy which resulted in you being blindsided by this failure?
Consider starting small, with short, clear, and engaging examples, like Newcomb’s problem, the PD or the Trolley problem, or the Milgram or Stanford experiments.
A common problem of novice instructors is to cram a lot more material into one class than the students can conceivably absorb. This is because we tend to underestimate how hard something is to learn after we have internalized it. After all, it looks so clear now! Consider reducing the amount of material you plan to present and go over more examples instead.
If you know your audience well, consider modeling their reactions to what you say, given their level of understanding, interest and skepticism, then plan for contingencies, like how to get a sidelined discussion back on track without being heavy-handed.
Good luck!
Newcomb and Trolley problems are too removed from the real world to be useful topics for an introductory class, and I’d say the others are too advanced for an introductory class. All of them are controversial enough that you can’t simply say, this is the right answer and all other answers are wrong.
Thinking aloud about how I might go about it (but without ever having done so) I wouldn’t start with biases. I’d start on the positive topics of the truth being out there and what you must do to discover it. The virtues of rationality, with the vices (biases, error) introduced to illustrate how people go wrong. The 2,4,6 problem is about the right level of example to use, rather than exotic decision theory.
Yeah, I thought about it, but then my personal ontology does not rely on the concept of objective truth, so I’ve been reluctant to suggest it. It is easy to imagine that postulating objective truth would likely devolve into a discussion of logical positivism and its issues, which is not what the OP wants.
Against that background idea, how do you manage even to safely cross a road?
Consider less strawman.
Sorry, I can’t make any better guess as to what you mean that would rule out “the truth of what traffic is out there and what you must do to perceive it” as valid concepts, while making crossing a road safely unproblematic.
I posted about my ontological views multiple times, here is one. Not interested in revisiting this discussion here, since it’s not relevant to the OP.
It’s awfully easy to read the Milgram or Stanford experiments as (e.g.) ammunition for anti-authoritarianism without deeply understanding what makes them tick. This seems to be a general problem with dramatic psychological results.
I haven’t noticed that that’s any less common among non-novice instructors.
In retrospect this was almost inevitable. Bias means one thing in modern society.
Taboo bias and try again?
Perhaps they could be called “errors”, errors we have systematic tendencies to make, and when describing them, explain every time why they are errors, why they fail to cut through to the truth. Then people may not find it so easy to interpret “biases” as being like taste in music or clothes.
Disclaimer: I have no experience of trying to teach this stuff to anyone.
I had sort of forgotten that “bias” could be taste in music or differential human outcomes based on “biased” treatment. Noticing that collision was helpful to me.
Also, I think there is an interesting quirk in the LW/local usage of the term “bias” and its general stance towards epistemology. The local culture is really really into “overcoming biases” with a zeal and cultural functionality that has echoes in the Christian doctrine of Original Sin.
(Not that this is bad! Assuming that people are in cognitive error by default because of biases is useful for getting people to actually listen with some measure of generosity to inferentially distant third parties and teachers and so on. Also, the “biases” framing powers a pretty good sales pitch for learning about meta-cognition because loss aversion is a known bias that people who need meta-cognitive training probably have. Given knowledge of loss aversion, you should naively expect people who need a rationality upgrade to be about three times more interested in avoiding cognitive downsides as compared to their enthusiasm for cognitive upgrades. The very name of the website “less wrong” is great marketing from this perspective :-P)
In any case, in academic psychology it is generally acknowledged that “biases” and “heuristics” are in some sense equivalent. Specifically, both processes involve leaping from hints to conclusions with locally inadequate justification. When this happens in a way that can be retrospectively determined to be incorrect, it gets negative valence and we call it a “bias”. When it comes out well so that it seems like a dandy cognitive labor saving device, it gets positive valence and we call it a “heuristic”.
The key insight is that heuristics are heuristics only in limited domains, and no technique that we call a heuristic can be profitably deployed outside its appropriate context. When someone attempts to deploy a heuristic in a completely generic way, they transport it outside of the context it was tuned for and it becomes a bias. In the meantime, there are distinct techniques that are neither biases nor heuristics, but they generally take much longer to compute, or require more data gathering than busy people with busy competitors have time for.
Cialdini’s book Influence has a bunch of great examples of contextually dependent cognitive shortcuts. If you lived in a small social context that had been self-contained, poorly mixed, and functional for a long period of time in the past, it would be a pretty great life heuristic to trust and copy people who were benevolent towards you, similar to you, but slightly higher status. Doing the same “trust and copy” routine with people you see on TV, people on random street corners, or with professional modern/urban salespeople who have read Cialdini is much less advisable. The heuristic becomes a bias because the social context has changed.
The issue of context and generalization can get really deep, and (so far as I’m aware) is not a solved subject with a widely recognized popular solution. An entry point into substantive literature on what is sometimes called “the foundations of inference” is Wolpert and Macready’s “no free lunch theorem” and thematically associated mathematical work having to do with compression and sorting.
A deep (and admittedly somewhat hand-wavy) conclusion that falls out of this work is that for inference or evolution or thinking to ever find any sort of “purchase”, there must be substantial structural and/or energetic redundancy in the local “reality”. Otherwise it would be pointless and/or impossible to progressively accumulate things like: (1) knowledge worth remembering or (2) adaptations worth having or (3) heuristics of seemingly generic utility. If physics were pure chaos and noise, there would be no life, no brains, and no point for those brains to be concerned with such abstruse concepts as epistemology in the first place.
This loops back around to the OP’s classification of some people as “magical thinkers”. Many humans do not seem to feel in their bones that they exist within a logically consistent mesh of redundantly patterned causation. They seem to model the world as being mostly chaos, with some moderately powerful agent(s) that approve or disapprove of various rituals being followed. I think what the OP is asking for is a way to convey “the feeling of existing within a logically consistent mesh of redundantly patterned causation such that various inference techniques are justified” to arbitrary humans via a few thousand words, but (tragically?) at the present time I do not know how to do it.
Yeah, I figured ‘errors’ would be a prime candidate for replacing ‘bias’. Almost anything else that made any sense at all would be better.
Other possible words: “distortion”, “contortion”, “inclination”, “tendency”, “trend”.
(generated using Google Translate, translating “bias” to another language and back)
I’ve heard “bias” and “conflict of interest” used as interchangeable synonyms in the same sentence before. I’ve also seen it often used to refer to partisanship.
Might want to specifically defuse those two preconceptions before any sort of course on biases can be taught.
I had this problem recently too, and my solution was to not mention “science” in and of itself, but mention heuristics based on probability. It’s much harder to argue that math is a social construct. If you can explain how biases fail using probability theory it might go over a lot better.
I think speaking in terms of probabilities also clears up a lot of epistemological confusion. “Magical” thinkers tend to believe that a lack of absolute certainty is more or less equivalent to total uncertainty (I know I did). At the same time, they’ll understand that a 50% chance is not a 99% chance even though neither of them is 100% certain. It might also be helpful to point out all the things they are intuitively very certain of (that the sun will rise, that the floor will not cave in, that the carrot they put in their mouth will taste like carrots always do) but don’t have absolute certainty of. I think it’s important to make clear that you agree with them that we don’t have absolute certainty of anything and instead shift the focus toward whether absolute certainty is really necessary in order to make decisions or claim that we “know” things.
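To make this concrete, here is a minimal sketch of the kind of arithmetic I mean (the screening-test numbers below are invented purely for illustration, not taken from any real test): base-rate neglect is a bias you can exhibit with a few lines of Bayes’ theorem, without ever having to invoke “science” as an authority.

# Illustrative only: made-up numbers for a screening test, showing why
# "the test is 90% accurate, so a positive result means I'm ~90% sure"
# neglects the base rate.
base_rate = 0.01        # P(condition) in the population
sensitivity = 0.90      # P(positive | condition)
false_positive = 0.09   # P(positive | no condition)

# Total probability of a positive result, then Bayes' theorem.
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(f"P(condition | positive) = {p_condition_given_positive:.2f}")  # ~0.09

The point is not the particular numbers but that the error becomes visible as arithmetic: a positive result moves you from 1% to roughly 9%, which is a real update but nowhere near the certainty intuition suggests.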
Not just magical thinkers. I heard Massimo Pigliucci making the same “this isn’t definitive and therefore it tells us nothing” argument on the most recent Rationally Speaking podcast.
You’re right. I think scientific thinkers can sometimes misinterpret skepticism as meaning that nothing short of peer-reviewed, well-executed experiments can be considered evidence. I think sometimes anecdotal evidence is worth taking seriously. It isn’t the best kind of evidence, but it falls above 0 on the continuum.
I’m generally skeptical of lecturing as a method for teaching anything. Find or invent a game where victory hinges on understanding some basic principle you want to teach, and have the club play it.
The problem with this approach is that compartmentalization may interfere, making people think of the game as “that weird thing where we do X”, where X is something you’re trying to teach people to apply to life in general.
Compartmentalization is a problem with everything. I expect it to be smaller with games than with lecturing.
Yeah, somehow you also need to train them to not do that, and I have no ideas along those lines.
Great idea. Will use.
Perhaps you did not pick the correct biases? Remember the way that Rational!Harry convinced Draco Malfoy? You need to start small. First produce a belief that your methods lead to more correct results, then build up more and more biases and have them create a more accurate picture of the world. Then when you have a critical mass, go for a bias that is more central to their worldview. You need your ideas to be strong enough to win the inevitable contradiction war.
Consider that LessWrong is a self selected community. People come here, read the site, and only stay if they agree. What if out of 10000 potential members the site only convinces 5% or 10%? And furthermore the people who disagree here don’t have the power to control the conversation so as to throw off people on the fence or people who have a small initial agreement.
Personally my anecdotal experience, re: convincing people of atheism, is that you need to focus on doubters and even then you maybe only get a 1⁄10 ideal result. Group conversions are unlikely to be effective. Maybe aim for a few people to talk to you afterwards about more info on the topic.
I think your post is quite ironic. You start by saying that you explicitly tried to teach them to detect biases first in themselves and then in other people. Then you say how they got it all wrong, without any investigation of whether your own beliefs might need updating.
You confuse the quest for reductionism with the quest for bias-free thinking. Those are two different projects. Nobody gives you a good Anki deck for rationality because there is nobody around who has reduced rationality to atomic concepts that you could stuff into an Anki deck. Most people don’t take reductionism seriously enough to try to use it on everything. Most people just use it for those questions for which other people use reductionism.
In many cases today the quest for empirical experiments is very different from the quest for reductionism. If you want to teach people to value empirical evidence, teach them to do QS experiments. If you do QS you will soon learn that it’s pointless to try to reduce every phenomenon you interact with to atomic units. It doesn’t change anything about the data, and you will make a lot of mistakes if you focus too much on reducing things.
If I on the other hand try to create an Anki deck for a topic it’s very important to practice reductionism and reduce concepts to atomic units. Running empirical experiments however doesn’t help much with creating a good Anki deck (at least if you don’t have a lot of people to test variations of the deck).
Both reductionism and empiricism are frames. It’s useful to know when to use which one, and when to use a different frame entirely.
Thanks for the clarification.
So far I haven’t touched the subject of reductionism with them; I feel they’re still too hostile to the idea. For the moment I’m focusing on the rules of logic and proper thinking.
Oog, I cringed when I read this. This kind of language is very hostile.
You mean Quantified Self, right? It wasn’t clear to me at first and I want to clarify for others.
Yes. Thanks for the note; in the future I’m going to spell it out while on LW.
I don’t think the problem is magical thinkers, it’s probably (as Luke said) that bias has more than one meaning.
It might be worth exploring how you tell when a behavior is a matter of harmless variations in custom and when some behaviors are better than others (identifying when a project is no longer worth pursuing rather than just assuming it should be completed).
You should characterize such discussions as “advanced”, and briefly comment on the major emotional, social, and status biases that go into such questions. When they have some understanding of their cognitive biases around questions of facts that they have no emotional investment in, then you can start talking about social and value laden biases, and maybe try some discussions where they are operative.
It’s your party. They’re the guests. When people are talking off topic, politely inform them of the agenda and move on.
I would just make the point that there’s nothing wrong with the idea that different people’s cultures and traditions and lifestyles are equally valid; cultural relativity does make sense in its own context (there’s nothing inherently better about living in a small middle-class suburb with 1.5 kids, a dog, and two cars). But just make the point that objective reality is not relative; when you’re talking about the universe itself and objective reality, it is something that simply is. There was one great quote: “reality is that which continues to exist even if you stop believing in it”.
That’s really the key here; don’t tell people that they’re wrong for thinking that each culture has its own value and that no culture or lifestyle is necessarily any better than any other, but just try to draw a hard line between that and the objective nature of reality, and therefore of science.
Why should the attempt to draw that line convince anyone who doesn’t care about the scientific project in the first place?
I’m at the moment reading “The Feeling Good Handbook”, which is about Cognitive Behavior Therapy. Part of the exercises in the book is the identification of irrational beliefs that make people unhappy.
The people you are interacting with probably want to be happy. That’s a place where you can meet them. If you can give people a clear idea that they can become happier by getting rid of their irrational beliefs, your chances of conversion are much better than if you simply try to enforce a hard line that makes the objective nature of reality special.
I think that most people have an intuitive understanding that there are some things that are objectively true and some things that are objectively false, at least in terms of the physical universe around us. Few people would disagree with a statement like that. If they don’t agree with that right away, give them some concrete examples: is the statement “If I’m standing on Earth and I drop a rock, it generally falls” more or less true than the statement “If I’m standing on Earth and I drop a rock, it generally flies into the sky”?
Most people will concede that the first statement is more true than the second statement, and you can work from there to a general principle.
Basically, you have to understand why people accept certain ideas. In the case of cultural relativism, the reason people accept it is that a lot of cultural and social behaviors and beliefs are in fact very relative, and have more to do with how someone was raised than with any sort of objective reality. If a person fails to understand that, then they’re likely to end up like the characters in the Dr. Seuss book “The Butter Battle Book”, who end up fighting a war over the question of which side of the bread you should butter.
Very often when a person has a general heuristic or way of thinking, there was originally a good reason for it; the person just made the mistake of applying that heuristic to situations where it doesn’t work. The way to deal with that isn’t to tell them the heuristic is wrong, because then they’ll think of all the times when it seems right to use it and dismiss you; it’s to try to draw a line to separate situations where the heuristic is useful and situations where it’s not.
Eh. Maybe. I tend to find that people are instinctively paranoid about that line of approach, though, since “you’ll be happier if you believe X” is a line of attack usually taken by religions, cults, and other hostile memes. And, of course, most people don’t think that their beliefs are irrational.
That will only work if the person has no emotional attachments to his current beliefs.
No. People might accept cultural relativism because all their friends think that cultural relativism is cool.
It’s a line of attack used by religions because it works for certain people. If you want to convince the kind of person who is religious because he believes the promise that being religious leads to a happy life, then you have to play on that level.
I think it’s fairly straightforward to find a minor belief of someone else’s that’s irrational and where changing that belief would make the person happier. It’s just a matter of being flexible enough to understand where another person is coming from, understanding the challenges they face in life, and understanding which beliefs hold the person back.
Convincing a person who believes that they are worth nothing that this is an irrational belief isn’t that hard. Changing the belief is a bit more difficult, but most people have plenty of beliefs that they don’t like to have.
That’s why it’s important to acknowledge that the idea they are using is useful in some situations, just not in others. It’s a way to “leave them a path of retreat”, a way for them to take a step back and not totally “lose” a belief they found useful and effective (and that they have an attachment to), but to realize that it’s just not useful in all situations.
In that one sentence, you are both dramatically overestimating and dramatically underestimating the vast majority of people at the same time.
Overestimating, because most people don’t discuss with their friends the virtues of cultural relativism on a regular basis. And underestimating, because the people who do are generally more thoughtful, philosophical types, who tend to hold their beliefs for actual reasons.
Keep in mind that “cultural relativism” isn’t anywhere close to the lowest common denominator here. It’s several steps more rational than the lowest common denominator, which is “my culture (Christian/American/white/English-speaking) is the best culture, and anyone who disagrees with it is either an idiot, evil, or both”.
Cultural relativism is a somewhat more rational level reached by people with a certain amount of education and intelligence, or who come from a more cosmopolitan/open-minded background. It’s not the most rational level, certainly not if they try to apply it to the sciences or if they use mangled versions of it to defend bizarre belief systems, but it’s not the least rational level either; we’re talking about people who are generally already in the top 20% or so. And people who are cultural relativists tend, almost by definition, to be willing to listen to other points of view; they’re more likely to hear you out if you appeal to them on a rational level and don’t treat them like idiots (which it sounds like you’re doing right now).
But we’re not talking about narrow minded conservative religious types; they, almost by definition, hate the idea of cultural relativism (because their culture is the only one that’s right). We’re talking about people who have moved past that.
Anyway, people who are religious are VERY resistant to hearing recognizable religious-style arguments being used for what they deem to be anti-religious purposes. They have a lot of resistance to the meme, because they carry a version of it themselves. It’s a reason that devout Christians don’t, as a rule, become Scientologists; they’re already protected against that type of memetic attack. It’s also why very religious Christians are so likely to call ideas like the singularity or transhumanism a “cult” and thus reject them; if you’re using something that looks to them like a religious argument to “convert” them to a “different religion”, their meme immune system rejects it instantly. (Obviously ideas about transhumanism and the singularity are not actually religions, but that doesn’t matter.)
The thread opened by talking about people who go to a Harry Potter fan club. The person who wrote the thread mentioned in the past that the fan club holds things like astrology lessons.
I would think that the audience is people in the vague New Age spectrum who like pop spirituality and have some sort of belief in God.
Being thoughtful just means that you are better at rationalizing your beliefs. It doesn’t make you escape the trap of holding beliefs for signaling social status. Read a bit of Robin Hanson.
Do you feel like I’m treating you as an idiot? If so, that’s not intended. Cultural relativists are not the target audience of posts I write on LessWrong.
That is probably true.
If a person is thoughtful and feels the need to rationalize their belief, then they are usually someone who can be reached through reason and rational arguments. If nothing else, they’ll probably have to improve their own rationalization, perhaps take a small step back from their previous position or have a little more doubt about it. Most people actually are willing to be convinced of most things, in the right situation, so long as you don’t try to push them too far out of their comfort zone all at once.
The only people who can’t be reached at all by reason are people who claim to be completely motivated by faith and belief.
Edit: Also, there are real and valid reasons that “cultural relativism” has become a system that intellectual types claim to have in order to signal social status. If you don’t understand that, then you’re never going to change the minds of the people who help create those signals in the first place.
No, no; I’m not offended. I just feel like you have an extremely low opinion of the people you’re talking about trying to convince, which is something you should generally try to avoid; if you act like you have contempt for someone, you will never convince them of anything.
I’m far from feeling contempt when it comes to cultural relativists not being convinced by reason.
There is nothing contemptuous about recognizing that another person wants to be happy and helping them to be happy.
I don’t let my emotions interfere with my reasoning on that level. I don’t let myself get blinded by compassion. I don’t act based on the belief that people should be rational. I have read enough cognitive psychology to know that they aren’t.
I think this is a classic example where arguments alone don’t do much. You don’t like cultural relativists on some level. You think you would need to feel contempt if you accepted the findings of cognitive psychology about how people come to hold the beliefs that they do.
If I don’t provide you with a way to not feel contempt while accepting cognitive psychology’s ideas about how humans come to hold the beliefs that they do, I won’t convince you, because you have something to lose on an emotional level.
At the moment I’m not trying to convince them. I want to convince you.
It’s less than two weeks ago that a woman with a New Age background asked me whether I teach meditation somewhere. I don’t have any problem with interacting in that environment.