Publication: the “anti-science” trope is culturally polarizing and makes people distrust scientists
Paper by the Cultural Cognition Project: The culturally polarizing effect of the “anti-science trope” on vaccine risk perceptions
This is a great paper (indeed, I think many at LW would find the whole site enjoyable). I’ll try to summarize it here.
Background: The pro/anti vaccine debate has been hot recently. Many pro-vaccine people often say, “The science is strong, the benefits are obvious, the risks are negligible; if you’re anti-vaccine then you’re anti-science”.
Methods: They showed experimental subjects an article basically saying the above.
Results: When reading such an article, a large number of people did not trust vaccines more, but rather, trusted the American Academy of Pediatrics less.
My thoughts: I will strive to avoid labeling anybody as being “anti-science” or “simply or willfully ignorant of current research”, etc., even when speaking of hypothetical third parties on my Facebook wall. This holds for evolution, global warming, vaccines, etc.
///
Also included in the article: references to other research that shows that evolution and global warming debates have already polarized people into distrusting scientists, and evidence that people are not yet polarized over the vaccine issue.
If you intend to read the article yourself: I found it difficult to understand how the authors divided participants into the four quadrants (α, β, etc.). I will quote my friend, who explained it for me:
I was helped by following the link to where they first introduce that model.
The people in the top left (α) worry about risks to public safety, such as global warming. The people in the bottom right (δ) worry about socially deviant behaviors, such as could be caused by the legalization of marijuana.
People in the top right (β) worry about both public safety risks and deviant behaviors, and people in the bottom left (γ) don’t really worry about either.
When some folks think “Science”, they don’t think “formalized careful thinking/writing things down/reproducibility,” they think “competitor priesthood.”
“Anti-science” seems like sloganeering.
I’ve previously remarked that
And in this case, the layman is much closer to the truth. While the scientist in question likely isn’t an idiot, he is basically a liar.
For the scenario indicated, “there’s no scientific evidence for X” is almost always false. The “scientist” in question is arguing from authority with a lie. He doesn’t have fantastical standards for evidence; he just pretends to himself that such standards are appropriate for things he disagrees with.
If the scientist actually says
possibly starting with “Maybe, but I have heard too many enthusiastic claims that failed later, so I’m skeptical.”, then it is no lie and neither party needs to depart angrily.
But a condescending “science says no” surely sounds like ivory tower arrogance.
If you’re actually pushing rationality in general rather than scientific results in particular, you could talk with the person about doing experiments.
Yes, whenever you hear “there’s no scientific evidence for X” you should keep in mind that there are published meta-reviews in support of homeopathy and telepathy.
Yes, there might be good reasons to assume that a lot of the studies that find that homeopathy and telepathy work are flawed, but saying there’s no evidence often just ignores the research.
If there’s really no evidence in favor, it usually just means that nobody has studied the question at all. In that case, if I hear from someone who lost weight with method X and nobody has run a study on it, there’s nothing wrong with trying method X yourself, provided the method doesn’t seem dangerous.
Oh yes, I avoid talking about fallacies for that same reason.
It doesn’t help that the “scientists” in the OP were in fact acting like priests, i.e., using appeals to the authority of an abstract concept and accusing their opponents of heresy rather than presenting arguments.
True. I think hardly anyone on either side would use the term “anti-science”. The terms aren’t important, but rather the article is referring to the “us-vs-them” mentality.
Also, I like the term “competitor priesthood.”
Google only turns up “About 915,000,000 results” for anti-science.
Not really surprising. If you tell most people that those who think as they do are ignorant savages, the expected response is not “Oh, we need to change our ways”. The expected response is “Fuck you”.
I know this feels obvious on paper, but when I look at people arguing for evolution or vaccines, it doesn’t look that way. I want to stress again that most people don’t go outright and insult people. Rather, arguments from the pro-vaccine, pro-evolution, etc. camps often have a subtle context of, “this is obvious, why are we even still talking about this?” When summed up across countless conversations, though, it constructs a trope of “people who don’t believe in evolution are ignorant savages.” It’s then really hard to keep that subtext out of your conversations.
Calling anti-vaccination people “anti-science” is a transparently bad persuasion tactic. Leave a social line of retreat.
Also, it probably isn’t even true that they’re anti-science. It’s more likely their stances on science are inconsistent, trusting it to varying degrees in different situations depending on the political and social implications of declaring belief.
Agreed.
Science isn’t a package you have to accept as one huge clump. There is nothing inconsistent about affirming some scientific claims and denying others, especially if you also believe that some sciences are more reliable at arriving at the truth than others (which very few scientists themselves would deny).
Good point, thanks. Skepticism of specific scientific claims is fully consistent with a “pro-science” outlook. I would maintain that people rejecting legitimate scientific claims often are inconsistent, though. Case in point: Young Earth Creationists who completely trust technology and medication that could only work if the scientific case for YEC is false.
They aren’t rejecting “legitimate scientific claims”, they’re disputing which claims are legitimate.
Can you give an example of such a technology and/or medication?
Upvoted.
I meant to say that if you believe a scientific claim to be legitimate, there are going to be implications of that for other parts of your worldview. When we misjudge what the implications of a belief are, we can believe it while simultaneously rejecting something it implies. (That’s what reductio ad absurdum arguments are for.)
I was under the impression that GPS was such a technology. I also don’t see much room for reasonably believing in evolutionary medicine without accepting macro-evolution—but that’s a bit of a stretch from my original point. After struggling to find examples, I’m going to downshift my probability of there being many around.
We also tend to overestimate how much parts of our worldview support each other, or as this quote says:
GPS requires corrections for general relativity; it’s somewhat of a stretch to say that implies the Big Bang.
Well, according to the wikipedia entry for evolutionary medicine the key concepts are:
These all involve at most micro-evolution and the observation that humans are well designed for an ancestral environment, neither of which YECs reject, to my knowledge.
In a geology course I took in undergrad, I was under the impression that successfully locating fossil fuels that can be used for production purposes requires understanding the mechanics of how fossils form and how long it takes organic material to fossilize, which explicitly requires deep time. Young Earth Creationism cannot be used as a model for providing the world’s oil, and our professor made sure that we understood those implications.
http://youtu.be/MWAbr-SoMAs
Pat Robertson (who considers young earth to be an embarrassment to his sort of Christian) uses that very example.
Here’s a relevant study, Negative persuasion via personal insult:
Years and years ago, when I was trolling creationist discussion boards (not for lulz, but to deconvert), I tried to make mild claims that were nonetheless adequate to send the more extreme elements into frothy rages.
Those were more effective at deconversion than the actual valid arguments that provoked them.
At the risk of seeming to ride a hobby horse (which I’m not), I post this:
Is there any risk that we (as a society) may lose science (or rather scientific literacy) in the medium run to religious or other anti-science factions?
This can actually happen and did happen more than once during human history. As a data point take this:
Frederick Starr: Lost Enlightenment
A very interesting account of the rise and fall of the Arab enlightenment in Central Asia.
First chapter here: http://press.princeton.edu/chapters/s10064.pdf
From that chapter:
It is quite possible to use science and reason to destroy it (or at least diminish it for some time).
Funny that I get the chance to post this off-track quote two times in a row.
Rationality could be defeated by one powerful enemy (e.g. religion), but also by a concentrated attack of many diverse enemies. These days I would be more worried by the latter.
Rationality is a common enemy of many beliefs. If you refuse one specific truth, you must also refuse other specific truths this one is connected with, then you must refuse the general rules of reasoning, and the whole meta-level. And this is where it becomes dangerous: people with different false beliefs can be opponents at the object level, but allies at the meta-level; they may all agree that all this talking about “evidence” is bullshit… some of them because it offends their religious beliefs; others because it leads to politically unacceptable conclusions; yet others because it can be used to support sexism or racism; etc. Each of them wants to remove some specific conclusion; all of them want to stop the same algorithms for reasoning.
I don’t think it’s so likely that the science in the west could be destroyed these days by religion alone. But it could be destroyed by systematic attacks from all sides: religious people who hate hearing about evolution, paranoid people who hate hearing about vaccination, libertarians who hate hearing about global warming, social justice warriors who hate hearing about differences between people, even the average Joe who hates being told he is wrong about anything… all of them can together agree (and vote, democratically) that scientists should just shut up when their results are inconvenient. And if this becomes a social norm, then scientists are pretty much only allowed to agree with the public opinion, and invent some new harmless gadgets.
I had the misfortune to watch part of a comedy tv show in a restaurant. I don’t know which show, but it featured two smart guys who were annoying to the women they were talking with for the high crimes of being funny-looking, not dressing well, and insisting on talking about intellectual matters.
I think one of the things which can take rationality down is that it’s associated with not taking status markers seriously, and people who are invested in status markers defend those markers.
I can remember when being intelligent was considered to be bad (I picked this up from the culture more than from individual people, I think) because it was both unfeminine and unmasculine.
Maybe we live in a short lucky period of time, where people remember that you can make a ton of money by being good with computers and perhaps some other science… and this gives some status to smart people (other than managers). The Nobel Prize is probably helpful here, too, because it’s something that everyone knows, so people know that science can somehow translate to very high status.
Then, why don’t smart people do the same thing? Instead of (or in addition to) competing with each other, why don’t we insist more that being smart is cool (and being stupid is uncool)?
Status games are zero-sum on the individual level (whether Alice is more popular than Bob, or the other way round, is only important to Alice and Bob), but when comparing groups, it can have an impact on the whole world. For example, higher status of scientists would probably mean more science, which improves everyone’s life. Just to put things in perspective: we spend $2×10^6 on Khan Academy, but $50×10^9 on the latest Winter Olympics. (I’m choosing two activities that anyone can enjoy, as opposed to a specific school or stadium where access is limited.) Yeah, that’s a factor of 25,000. Because running really really fast is so much more important than understanding numbers.
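The ratio quoted above follows directly from the two rough figures in the comment (these are the comment's own ballpark numbers, not audited budgets); a one-line sanity check:

```python
# Rough figures quoted in the comment above (not audited budgets)
khan_academy_spending = 2e6    # ~$2 million
winter_olympics_cost = 50e9    # ~$50 billion (Sochi 2014 estimate)

ratio = winter_olympics_cost / khan_academy_spending
print(f"Olympics-to-Khan-Academy spending ratio: {ratio:,.0f}x")
```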
To an extent, I think that does happen.
I think that part of the problem is that a lot of people think that “smart=successful=does well in school=the establishment figure”, and that therefore people who don’t do well in school tend to lash out against it for fairly obvious reasons. If a person who can’t do well in school views himself as “not smart”, then for reasons of ego-self-preservation he will tend to decide that “being smart isn’t important” and lash out against things he associates with that trait.
We do live in a short lucky period of time. I agree that people suddenly being able to make a lot of money in IT improved the status of smart people.
As for why smart people don’t do more to improve the status of smart people… I only have guesses. One is the feeling that status manipulation is unclean—it takes being in the falsehood business. Another is the perception that it’s hard.
I may as well mention treachery—the writers of that tv show got the details right for intelligent talk.
Our short lucky period is embedded in a long unlucky period. As I understand anti-intellectualism (the American variety—I don’t know whether it’s different in other places), it’s theoretically valuing practicality over theory. This is not always the wrong choice, considering the consequences of bad theory, especially state communism.
This might be amusing, considering that a lot of theory went into the surprisingly effective American government. Don’t laugh—the founding fathers had to invent it. So far as I know, there was no prior experience with large-scale democracy.
However, the valuing of practicality only makes sense for people who actually have practical knowledge, and that’s becoming less common because so much more is automated.
I only know a little of the history of how sports came to be hugely important, but I know they weren’t such a big deal in all times and places. We should put them on the list of supernormal stimuli.
That doesn’t sound right to me. “Valuing practicality over theory” is usually called “science”. The slaying of the beautiful hypothesis by a little ugly fact, and all that.
I see anti-intellectualism as consisting of mostly two parts: (1) making smartness out to be a bad thing, something to be ashamed of; and (2) suppressing anything outside the groupthink, and the general stress on the “us vs. them” paradigm.
The greater the inferential distances, the less it seems so. What exactly is the practical aspect of string theory? On the other hand, a microwave is pretty useful, but somehow it doesn’t feel scientific. It’s just a technical thing.
It’s like the specialization is too extreme for our intuitions today. It used to be:
Average people who use stuff.
Smart people who do science and create stuff.
But these days it’s more like:
Average people who use stuff.
Skilled people who create stuff.
Smart people who do science… which seems kinda unrelated to the stuff.
The romantic science types like MacGyver or the mad scientists (I’m sure there are many good examples, but they don’t come to my mind right now) are people who study science and then apply it. But in real life, the people who create science, and the people who apply it are not the same.
For example, I can create computer programs, but I never invented anything scientific in computer science. And then there are people who have PhDs in computer science and publish in peer-reviewed journals, but probably couldn’t make a decent text editor. The link between top science and doing cool stuff is lost. Einstein can say some weird things about space-time, but unless he had the Nobel Prize he couldn’t even become rich from this knowledge. He can’t use his space-time knowledge to build a spaceship or a teleporter in his garage. He doesn’t have the power in his hands. A carpenter can make you a new table, but Einstein can’t do anything for you directly.
We don’t see science directly translated to power by the scientists. Einsteins are smart, but Zuckerbergs are rich. And even that’s awesome, because Zuckerberg at least is a programmer. It could be worse: you could have a bunch of poorly paid smart programmers (preferably working remotely from some third-world country) making some IT-illiterate boss rich.
Disagree. Here is a blog post by Eric Raymond describing five different types of anti-intellectualism. Your (1) and (2) correspond roughly to his thalamic and totalizing types respectively.
Here are his descriptions of the other three:
Yeah. When people start using “intelligence” as a label for their ideology, of course the people who dislike the ideology will reject the label. There is a risk of the same thing happening to “rationality”. We have to actively oppose this misuse, because if it becomes popular, most people won’t care about the technical definition of the word.
Traditionalism and skepticism make a lot of sense in a world where many scientific experiments don’t replicate, don’t they? It’s like treating all new information as extremely weak evidence. Which makes sense if you have very low trust in the source that generates the information. And sometimes the sources really are not trustworthy. My only problem with these people is that they don’t understand that some scientific disciplines are more trustworthy than others. On the other hand, even some scientists would object to this.
In my experience these people are pretty good at treating different scientific disciplines differently. Frequently much better than the scientists themselves.
This depends crucially on what you’re counting as large-scale democracy. The Roman Republic in some periods may qualify, although most of the time it seems to have been a de-facto oligarchy and its franchise was always quite limited. Iceland was governed by a representative body, the Althing, between 930 and 1262, but its population has never been very large. Venice had a (rather odd) electoral system during its city-state period. The development of the British Parliament from an advisory council into a full-blown representative body and major seat of government was extremely gradual and started quite early; Wikipedia cites De Montfort’s Parliament in the late 13th century as the first elected one.
I think it’s fair to cite the US under its current constitution as the first modern democratic republic of any great size, but I don’t think I’d call it the first one.
I think the core issue is cooperation. Eliezer’s “Why Our Kind Can’t Cooperate” provides a perspective.
Making intelligence high status means having public role models who are intellectuals. We can’t really agree on role models.
Sports fans can agree that Tiger Woods is awesome even if they don’t like golf.
Agreeing that an extremely smart charismatic figure like Julian Assange is awesome is much harder because it’s political. Agreeing that Sergey Brin and Larry Page are awesome is political. Agreeing that Peter Thiel is awesome is political.
Steve Jobs managed to have status while expressing his intelligence, but he was a Buddhist who painted himself as being serious about beauty.
As a community we also don’t agree that we want to stand our ground on intelligence. When talking about LW PR implications someone argued that being seen as a crowd of people who think that they are smart is bad for LW.
When doing QS press work I never tried to pretend not to be a geek. In one instance I put on EEGs with a friend and danced while projecting the visualization of the EEGs onto the wall behind us. The goal was to stand my ground that being a geek is cool.
On LW people try to tell me that using the age-old technology of a bow and practicing firing arrows is cool, and that the activity maximizes their coolness function.
If the idea of smart people to be cool is about firing arrows with a bow, why should anyone consider smart people to be cool and high status?
It’s difficult to be a sports star, but it’s easy to recognize one. It doesn’t work the same with science. At least we have the Nobel Prize to tell us who the cool scientists are; otherwise most people wouldn’t know. But being told is not the same as seeing. People enjoy watching sports. (Actually, it’s only easy to recognize sports stars in a specific environment. If no one had ever organized golf championships, we wouldn’t know who the best golf players are.)
If we could perhaps make the scientists somehow… compete with each other in few-minute sessions… doing something that the audience could understand at least on the “who is winning now” level. (This understanding should be supported by expert commentators.) Okay, this is another big problem: science is slow, and people want quick closure. You could make a competition in something science-related, but it would not be true science; the best scientists would not necessarily win. Even so, I think it would be nice to have some science-correlated role models. So, inventing a science-ish TV competition is one possible way.
The sports stars are well-compartmentalized. I am using this as an opposite of the “political” you mentioned. It’s not necessarily politics in the usual sense of the word, but the truly awesome intelligent people do something significant; and when you do something significant, you are almost guaranteed to be hated by a lot of people, because you disagree with them or even prove them wrong. The sports stars are safe: they stay in their place and usually don’t move outside it. So in some sense the sports stars are popular because they are at the same time awesome and completely useless. Admirable, but not threatening.
Perhaps it’s not about what you do, but how you do it, and who you are. As an example, imagine that a movie star were to buy an ultra-expensive archery range, invite a dozen celebrities and a television crew, and they would chat, drink and eat, and shoot their bows. It would be cool and high-status, and it could even start a new fashion wave. However, if you do this with a small group of geeks, it will not have the same effect.
The way to be cool is to optimize directly for coolness. Just like in Paul Graham’s essay. Although he says that this stops when you are out of high school. I’d say that when you are out of high school, the punishments for not being cool enough simply stop being so severe, so you are allowed to just live your life. But if you want to be cool, it’s still tough, and it won’t happen accidentally. Firing arrows with a bow per se is not optimized for coolness. It could be upgraded to be a cool project, but that would require a lot of resources. Yes, being cool is also expensive.
The only way to be cool is to optimize for the coolness explicitly. Preferably without other people realizing that. I believe this is actually what most people perceived to be cool do, although they would probably deny it. (This creates a problem of how to falsify my hypothesis.)
Cool can mean expensive, but it might very well mean that you are simply willing to break some silly convention that other people take for granted but that nobody really cares about.
Let’s say I wanted to be a music star in the 21st century. What do I need? Good scenery for Youtube videos. Lyrics that have some message, maybe about the value of Bayes’ Rule. I don’t need to be able to sing, because of auto-tune. If I want to do something QS-inspired, I could run live data from a QS device through some algorithm that converts it into sound that people can listen to.
Fulfilling those steps takes some effort, but the financial part isn’t that big. At the end I would have a project that’s remarkable in Seth Godin’s sense of the word.
I probably wouldn’t even need to contact bloggers myself; they would come and want to hear from me, to write a story about me. Once you are willing to violate a few boundaries, coolness happens.
I don’t understand why there’s nobody who is seriously open about using auto-tune as a way to convert meaningful texts into music. In a world where established stars use it to hide and the establishment treats it as a sign of decadence, there’s a story in using it when you can’t sing at all, to express a message that isn’t expressed in today’s media.
And the risk is that a) science gets associated/aligned with political groups and b) someone from the other political groups gets powerful and—using the methods of his opponents—uses the meta-level as a unifier to bring down the science faction.
Or worse: b) scientists begin passing off said group’s ideology, including the parts that are BS, as “science” and c) it becomes increasingly obvious that the things being most loudly proclaimed as “science” are in fact BS.
What I just noticed is that quite a lot of high-profile science, like that in cosmology or elementary particle physics, looks like BS to the layman. And it may not be that far off. There is a lot of BS which forms and derives some inner logic. Its consistency may not be that much different from that of the ‘real’ theories proposed for e.g. quantum gravity (judging from the differences between the QG theories).
Thus: having too much ‘high-profile’ science and too few ‘real’ results (like cancer cures) may also hurt science in the public eye (and be exploited by demagogues).
I don’t think this actually causes much of a problem. Having beliefs about things that happened far in the past that trip up the absurdity heuristic certainly hasn’t hurt religion. The biggest problem is BS pronouncements about things that people can readily observe.
Do you believe that b) already happened? If not how high do you consider the likelihood that it will happen in the next 20 (40) years?
For many fields, yes. See here for an insider in one of these fields justifying knowingly passing off BS as science “for the greater good”, i.e., to promote her ideology.
Vladimir_M gives some good heuristics for determining which fields are corrupted here.
Specifically this part:
We have the Ethical Injunctions sequence explaining some problems with this kind of reasoning. But the obvious consequence is that when you start doing this, and it becomes known (which probably happens soon enough—but unless you succeeded in destroying Science, it is destined to happen some day), you have done great damage to the public image of Science as a side effect, which will cause many problems down the line.
As a trivial empathy pump, imagine how you would feel if your political opponents had this opportunity and had no scruples about abusing it. Of course, they would believe they were improving the world by doing it. And their beliefs might be wrong because of some other lies, which they would get from a trusted source. And the only institution for systematically finding the truth would be corrupted, for the supposed “greater good”.
When scientists start doing this, Science is no longer seen as something that can determine whether the sky really is green or blue, but becomes merely another soldier on the Green side.
Also note that of the two possible outcomes, doing great damage to the public image of Science is actually the lesser evil; thus people who care about science should be pushing for this outcome. Unfortunately, since the harm to the reputation of science is more visible than the harm to science itself, there is a temptation and tendency to avoid exposing this stuff in order to preserve science’s reputation.
This is a huge mistake: at best this will ultimately blow up in their faces; at worst the result will be science turning into a highly reputed religion whose pronouncements no longer correspond to reality.
Doing that and then defending that position publicly on the internet under your own name seems extremely stupid.
I think determining whether ideology is behind questions isn’t easy. One man’s ideology is another man’s common sense.
Probably doesn’t much matter unless you yourself work in one of the allegedly afflicted fields or in a position that’s sensitive to their opinion, i.e. in media or a public-facing administrative or academic role. I’ve occasionally heard of folks outside that category getting doxxed and people trying to get them fired for promoting ideological positions that are Considered Harmful, but on the whole it seems fairly rare.
I don’t mean the personal risk but the damage to the movement itself. It provides someone like Eugine Nier ammunition to talk down on feminism that he otherwise wouldn’t have.
Using that as ammunition in a discussion like this isn’t very damaging, but in the age of blogs it doesn’t take much to give that ammunition to a blogger who weaves it into a story.
Misunderstood your comment; my criticism didn’t make sense with the correct interpretation. Sorry I didn’t get to the post before you did.
Now that I’ve got it right, though, it seems to me that the behavior you’re talking about might indeed be made sense of in terms of treating scientific integrity as a less sacred value than whatever you’re trying to defend. “Screw inconclusive evidence; people are hurting” is exactly what I’d expect to see from an activist who’d absorbed a meme somewhere about the scientific process being just another frame for looking at the world, and that’s unfortunately not an uncommon one in activist circles. I don’t think there are all that many activists who would explicitly endorse this way of looking at the problem, but you don’t need explicit endorsement to decide some scientific body is untrustworthy for ideological reasons.
More charitably, in the context of social science and medicine, there’s quite a lot of stuff that’s still under dispute or has only weak evidence pointing one way or another (including the linked post) and choosing to favor the interpretation you find more convenient for political reasons doesn’t quite seem to qualify as lying. Particularly since everyone’s got their halo effects coloring everything. JulianMorrison’s comment is the first time I’ve ever seen someone coming out and saying it in public, though.
Not the optimal move, certainly, but I wouldn’t call it extremely stupid.
Changing your frame of looking at the world is like changing clothing.
You don’t go sloppily dressed into an environment where everyone wears suits if you want to convince them to follow your political ideology.
I don’t follow hardcore feminism, but if I were to move in an environment where everyone operates from that frame, I wouldn’t wear the scientific-method frame. I’d rather speak about how they are pretty judgemental about people who disagree with them, and about there being better methods of dealing with people than being judgemental.
If you think that your scientific frame is the only one there is, then it’s no sign of stupidity when you try to convince hardcore feminists with evidence. But for someone who comes from a background where they should be aware that there are different frames of looking at the world, it’s sloppy.
The problem is that not all frames are created equal. Some are actually useful for discovering the truth and/or improving the world, others are mostly only useful for signalling.
If your goal is to improve the world and the people with whom you are talking are a bunch of feminists, engaging them on the issue of them judging people is more likely to reach them than engaging them on the issue of not being in line with scientific evidence.
Also, if you really believe that the frame of science is more useful for either of those goals, where are your numbers? Where are the people you studied who hold that belief and are more effective at discovering the truth and/or improving the world?
If you don’t have those numbers because nobody really cares about using the scientific method to validate that belief, you have two choices:
Stop burning witches
Admit that witchcraft exists
You can’t really argue that science is the best frame for improving the world and then hold that belief based on nonscientific reasoning that’s backed up by zero data.
It should be possible to find a metric for whether someone uses science as their primary frame, and possible to find a way to measure whether an individual improves the world. At least, if you do believe in the scientific project, then it should be possible to measure such things. If you don’t think they are measurable, there goes your scientific method for finding out the truth.
Would you apply this kind of logic to other groups with a non-scientific frame, say, creationists? Judging by your comment here, I’m guessing the answer is no. So why are you so willing to adopt the feminist “frame” rather than call them out on their BS?
It depends where and why I meet them. There’s a lot of value in promoting that the scientific frame of viewing things gets used in science classes. That’s what science classes are for. To the extent that you have good science classes, they teach students to use the scientific frame of viewing the world. That’s a cause worth fighting for. Not my fight, but I like the fact that there are people who care about fighting it.
My fight is more about getting science classes to actually teach students to do experiments and learn from empirical reality instead of believing in the authority of their textbooks.
If I have a small-talk conversation with a woman between two salsa dances and she mentions that she is a creationist, I have absolutely no interest in “calling her out on her BS”. It would destroy rapport between us and make the next dance worse. It would also likely fail to change her view. It might even strengthen her belief in creationism ( http://lesswrong.com/r/discussion/lw/jmz/publication_the_antiscience_trope_is_culturally/ ).
A while ago I went to my meditation teacher and told her: “Bob is doing X and X is wrong.” She answered: “Yes, but he’s not at a level where he can learn what to do instead of X. It might take a year or two. If I raise the topic with him before he’s ready, that will produce resistance around the topic that might be more harmful than helpful.”
If I want to change someone’s views, then I have to understand the person I’m dealing with and what I want out of the situation. That allows for much more effective action than calling the other person’s ideas bullshit.
I once had an online discussion to deconvert a hardcore Darwinist. I succeeded in removing that framework, but he replaced it with Zeitgeist-inspired collectivism. There wasn’t any real progress, just a move from believing in one framework to believing in the next. No improvement in general ability to reason or to take empirical evidence seriously.
Today that’s not my goal. If I make someone long for real evidence that holding a scientific frame helps with improving the world, instead of holding that belief on faith, I win.
There’s a difference between not correcting someone’s mistake and adopting their frame.
You didn’t even talk about correcting mistakes but about calling out bullshit.
In NLP and hypnosis jargon, there’s the concept of pacing and leading.
In most direct interactions, it’s useful to adopt the frame of the person you are dealing with first. That creates rapport that you can then use to lead the person where you want them to go.
I have no attachment to the frame in which I’m operating. That allows me to be conscious about the frame I’m holding. It’s just a map. The map isn’t the territory.
Rationality is about winning and not about signaling something by wearing the right frame that looks cool to fellow rationalists.
And some clothes are actually useful for keeping you warm and dry and comfortable while others are mostly only useful for signalling, so what’s your point? ;-)
Some fashionable frames are the equivalent of tight-lacing and foot binding.
Signalling is useful.
To me it seems these memes are floating around quite frequently. In some circles, all you need to do to discredit science is to say that it was made mostly by white men (focusing on the ad hominem and completely ignoring the idea of the scientific method). This is a fully general counterargument against any scientific argument you dislike. Of course, most of these people are not scientists. But sometimes one of them may decide to do science, for the sake of improving the world.
It does if you proceed to accuse opponents disputing that interpretation of being anti-science.
Predicting future trends is hard, but before we do, we should ask what we mean by losing science or scientific literacy.
I think a straightforward definition of scientific literacy in a case like global warming would be to see whether or not an individual has read at least part of the IPCC report. Global warming is an important topic, so it makes sense to inform yourself by reading what the most authoritative source has to say on the issue. I think a lot of scientists fail that test and instead get an impression of what the IPCC report says from secondary sources.
If you ask me, we don’t live in a time where the average person has a lot of scientific literacy. A few people do read meta-studies on a regular basis to increase their understanding of reality, but the average person doesn’t.
It’s also important to keep in mind that our present idea of science, with placebo-controlled trials, is different from the idea of science that existed 100 years ago. We progressed. LW rationality, of doing what increases your chances of winning, is different from the rationality that existed 100 years ago.
We are not at the pinnacle of human knowledge, and the kind of scientific literacy that will exist in 100 years is going to be different from the kind that exists at the moment. It’s hard to say how it will be different. I doubt most people 100 years ago would have predicted that placebo-controlled trials would become as central to our idea of science as they have.
Instead of trying to defend scientific literacy as it is, it’s better to focus on developing it and making it richer. That means having a culture where more people read the primary literature. It also means things like Ben Goldacre’s crusade for the opening of clinical trial data, and putting more energy into reproducible scientific results.
I think those fights are much more central to the future of science than the fight against anti-science folks.
Actually, ‘fighting anti-science’ folk may make matters worse.
But improving broad scientific literacy seems to me an effective means to prevent anti-science memes from being exploited for political reasons.
Keeping creationism out of science classes takes a certain fight. That’s a worthwhile fight, and it’s useful that some pro-science people fight it.
The fight will anger some religious people, but it’s still worthwhile.
On the other hand, the fight with external groups shouldn’t take most of the resources. Keeping your own house in order is much more important than winning against other groups.
This has been a background worry for me ever since I saw C.S. Lewis bring up the possibility.
Science is essentially a moral enterprise—scientists (and those who fund them and those who use scientific results) need to engage with the difficult real world rather than just seek status and convenience.
I’m more concerned about science just deteriorating to the point where it isn’t useful (we see a fair amount of that in medical research already) rather than it being taken down by active opposition.
I don’t see the medical case as much different from other science. And it is not clear-cut either. The problems being:
corporations wanting/needing to reap gains
unhelpful incentive structures
missing meta science (efficiently driving or aggregating research) and missing base level expertise
and of course politics
Historical scientists had the disadvantages of flammable libraries, centralized knowledge and centralized political power.
We don’t have centralized political power now?
If you look at the feat that Julian Assange pulled by understanding the political system and playing it, power isn’t as centralized as it was in the past.
On the other hand, Julian Assange did lose his ability to move freely, so it’s not like the state lost all its power. http://globalguerrillas.typepad.com/globalguerrillas/2014/02/update-on-one-man-vs-the-world.html is a nice blog that covers the rise of superempowered individuals who hold power in a decentralized way that wasn’t there in the past.
If your science gets censored in France because of France’s latest idea that censorship is the way to go, you are free to move to a different country, and there are plenty to choose from where you are relatively free to practice science.
I try to use language economically; there’s a precision trade-off. On a spectrum from centralized to decentralized, do you think it’s more centralized now than it was in the Middle Ages?
In a sense, most certainly yes! In the Middle Ages, each fiefdom was a small city-state, controlling in its own right not all that much territory. There certainly wasn’t the concept of nationalism as we know it today. And even if some duke was technically subservient to a king, that king wasn’t issuing laws that directly impacted the duke’s land on a day-to-day basis.
This is unlike what we have today: We have countries that span vast areas of land, with all authority reporting back to a central government. Think of how large the US is, and think of the fact that the government in Washington DC has power over it all. That is a centralized government.
It is true that there are state governments, but they are weak. Too weak, in fact. In the US today, the federal government is the final source of authority. The president of the US has far more power over what happens in a given state than a king in the middle ages had over what happened in any feudal dukedom.
And a digital network is potentially more unstable than a physical one. If it is easier to replicate data, it is also easier to replicate destructors.
I believe Eliezer has accidentally already created a path forward for that eventuality: We instantiate the Bayesian Conspiracy and show off how much better our miracles are than anyone else’s. Brennan’s World, here we come.
But they may not be sufficiently better; a convincing demagogue may unmake them. The art may not be strong enough: http://lesswrong.com/lw/9p/extreme_rationality_its_not_that_great/
As a corollary, how many people trust science more because they hear some misrepresentation that quantum physics says X or Y?
This reminds me: effective communication and persuasion skills seem like they should be core concepts of rationality. Does CFAR teach effective persuasion in their workshops? I was reading an interview with an FBI hostage negotiator, and the techniques he talks about in hostage negotiation seem like they would be pretty good skills to have; I wouldn’t want to join the FBI just to get a chance to learn that stuff in a structured environment.
Well, seeing as irrelevant factors seem to have such a big influence on people’s perceptions, it seems to me that the thing to do would be to talk about those factors in relation to anti-vaxxers. Less “they’re willfully ignorant of the research” and more “they willfully kick their wives and beat their dogs”.
You can proselytize a person of faith. You can reason with a person of science. There’s nothing to be done with a person of faith who thinks they are a person of science.