I’m not addressing the paper specifically; I’m answering your question more generally. I still think it applies here, though. When they identify “misinformation”, are they first looking for things that support the wrong conclusion and then explaining why you shouldn’t believe this wrong thing, or are they first looking at reasoning processes and explaining how to do them better (without tying it to the conclusion they prefer)?
For example, do they address any misinformation that would lead people to being misled into thinking global warming is more real/severe than it is? If they don’t and they’re claiming to be about “misinformation” and that they’re not pushing an agenda, then that’s quite suspicious. Maybe they do, I dunno. But that’s where I’d look to tell the difference between what they’re claiming and what Lumifer is accusing them of.
Well, the authors clearly hold that global warming is real and that the evidence for it is very strong. Does that invalidate the paper for you?
The fact that they hold that view does not. It’s possible to agree with someone’s conclusions and still think they’re being dishonest about how they’re arguing for it, you know. (And also, to disagree with someone’s conclusions but think that they’re at least honest about how they get there.)
The fact that it is clear from reading this paper which is supposedly not about what they believe sorta does, depending on how clear they are about it and how they are clear about it. It’s possible for propaganda to contain good arguments, but you do have to be pretty careful with it because you’re getting filtered evidence.
(notice how it applies here. I’m talking about processes not conclusions, and haven’t given any indication of whether or not I buy into global warming—because it doesn’t matter, and if I did it’d just be propaganda slipping out)
When they identify “misinformation”, are they first looking for things that support the wrong conclusion [...] or are they first looking at reasoning processes
What makes misinformation misinformation is that it’s factually wrong, not that the reasoning processes underlying it are bad. (Not to deny the badness of bad reasoning, but it’s a different failure mode.)
do they address any misinformation that would lead people to being misled into thinking global warming is more real/severe than it is?
They pick one single example of misinformation, which is the claim that there is no strong consensus among climate scientists about anthropogenic climate change.
If they don’t and they’re claiming to be about “misinformation” and that they’re not pushing an agenda, then that’s quite suspicious.
It would be quite suspicious if “global warming is real” and “global warming is not real” were two equally credible positions. As it happens, they aren’t. Starting from the premise that global warming is real is no more unreasonable than starting from the premise that evolution is real, and not much more unreasonable than starting from the premise that the earth is not flat.
The fact that it is clear from reading this paper which is supposedly not about what they believe sorta does
I disagree. If you’re going to do an experiment about how to handle disinformation, you need an example of disinformation. You can’t say “X is an instance of disinformation” without making it clear that you believe not-X. Now, I suppose they could have identified denying that there’s a strong consensus on global warming as disinformation while making a show of not saying whether they agree with that consensus or not, but personally I’d regard that more as a futile attempt at hiding their opinions than as creditable neutrality.
I [...] haven’t given any indication of whether or not I buy into global warming
I think you have, actually. If there were a paper about how to help people not be deceived by dishonest creationist propaganda, and someone came along and said “do they address any misinformation that would lead people into being misled into thinking 6-day creation is less true than it is?” and the like, it would be a pretty good bet that that person was a creationist.
Now, of course I could be wrong. If so, then I fear you have been taken in by the rhetoric of the “skeptics”[1] who are very keen to portray the issue as one where it’s reasonable to take either side, where taking for granted that global warming is real is proof of dishonesty or incompetence, etc. That’s not the actual situation. At this point, denial of global warming is about as credible as creationism; it is not a thing scientific integrity means people should treat neutrally.
[1] There don’t seem to be good concise neutral terms for the sides of that debate.
It would be quite suspicious if “global warming is real” and “global warming is not real” were two equally credible positions.
Both are quite simplistic positions. If you look at the IPCC report there are many different claims about global warming effects and those have different probabilities attached to them.
It’s possible to be wrong on some of those probabilities in both directions, but thinking about probabilities is a different mode than “On what side do you happen to be?”
Incidentally, the first comment in this thread to talk in terms of discrete “sides” was not mine above but one of jimmy’s well upthread, and I think most of the ensuing discussion in those terms is a descendant of that. I wonder why you chose my comment in particular to object to.
I don’t know about you, but I don’t have the impression that my comments in this thread are too short.
Yes, the climate is complicated. Yes, there is a lot more to say than “global warming is happening” or “global warming is not happening”. However, it is often convenient to group positions into two main categories: those that say that the climate is warming substantially and human activity is responsible for a lot of that warming, and those that say otherwise.
What makes misinformation misinformation is that it’s factually wrong, not that the reasoning processes underlying it are bad.
Yes, and identifying it is a reasoning process, which they are claiming to teach.
It would be quite suspicious if “global warming is real” and “global warming is not real” were two equally credible positions. As it happens, they aren’t.
Duh.
You can’t say “X is an instance of disinformation” without making it clear that you believe not-X.
Sure, but there’s more than one X at play. You can believe, for example, that “the overwhelming scientific consensus is that global warming is real” is false and that would imply that you believe not-”the overwhelming scientific consensus is that global warming is real”. You’re still completely free to believe that global warming is real.
I think you have, actually.
“What about the misinformation on the atheist side!” is evidence that someone is a creationist to the extent that they cannot separate their beliefs from their principles of reason (which usually people cannot do).
If someone is actually capable of the kind of honesty where they hold their own side to the same standards as the outgroup side, it is no longer evidence of which side they’re on. You’re assuming I don’t hold my own side to the same standards. That’s fine, but you’re wrong. I’d have the same complaints if it were a campaign to “teach them creationist folk how not to be duped by misinformation”, and I am absolutely not a creationist by any means.
I can easily give an example, if you’d like.
If so, then I fear you have been taken in by the rhetoric of the “skeptics”[1] who are very keen to portray the issue as one where it’s reasonable to take either side,
Nothing I am saying is predicated on there being more than one “reasonable” side.
where taking for granted that global warming is real is proof of dishonesty or incompetence, etc
If you take for granted a true thing, it is not proof of dishonesty or incompetence.
However, if you take it for granted and say that there’s only one reasonable side, then it is proof that you’re looking down on the other side. That’s fine too, if you’re ready to own that.
It just becomes dishonest when you try to pretend that you’re not. It becomes dishonest when you say “I’m just helping you spot misinformation, that’s all” when what you’re really trying to do is make sure that they believe Right thoughts like you do, so they don’t fuck up your society by being stupid and wrong.
There’s a difference between helping someone reason better and helping someone come to the beliefs that you believe in, even when you are correct. Saying that you’re doing the former while doing the latter is dishonest, and it doesn’t help if most people fail to make the distinction (or if you somehow can’t fathom that I might be making the distinction myself and criticizing them for dishonesty rather than for disagreeing with me).
identifying it is a reasoning process, which they are claiming to teach.
I don’t think they are. Teaching people to reason is really hard. They describe what they’re trying to do as “inoculation”, and what they’re claiming to have is not a way of teaching general-purpose reasoning skills that would enable people to identify misinformation of all kinds but a way of conveying factual information that makes people less likely to be deceived by particular instances of misinformation.
“What about the misinformation on the atheist side!” is evidence that someone is a creationist to the extent that they cannot separate their beliefs from their principles of reason
Not only that. Suppose the following is the case (as in fact I think it is): There is lots of creationist misinformation around and it misleads lots of people; there is much less anti-creationist misinformation around and it misleads hardly anyone. In that case, it is perfectly reasonable for non-creationists to try to address the problem of creationist misinformation without also addressing the (non-)problem of anti-creationist misinformation.
I think the situation with global warming is comparable.
You’re assuming I don’t hold my own side to the same standards.
I’m not. Really, truly, I’m not. I’m saying that from where I’m sitting it seems like global-warming-skeptic misinformation is a big problem, and global-warming-believer misinformation is a much much smaller problem, and the most likely reasons for someone to say that discussion of misinformation in this area should be balanced in the sense of trying to address both kinds are (1) that the person is a global-warming skeptic (in which case it is unsurprising that their view of the misinformation situation differs from mine) and (2) that the person is a global-warming believer who has been persuaded by the global-warming skeptics that the question is much more open than (I think) it actually is.
then it is proof that you’re looking down on the other side.
Sure. (Though I’m not sure “looking down on” is quite the right phrase.) So far as I can tell, the authors of the paper we’re talking about don’t make any claim not to be “looking down on” global-warming skeptics. The complaints against them that I thought we were discussing here weren’t about them “looking down on” global-warming skeptics. Lumifer described them as trying to “prevent crimethink”, and that characterization of them as trying to practice Orwellian thought control is what I was arguing against.
It becomes dishonest when you say “I’m just helping you spot misinformation, that’s all” when what you’re really trying to do is make sure that they believe Right thoughts like you do
I think this is a grossly unreasonable description of the situation, and the use of the term “crimethink” (Lumifer’s, originally, but you repeated it) is even more grossly unreasonable. The unreasonableness is mostly connotational rather than denotational; that is, there are doubtless formally-kinda-equivalent things you could say that I would not object to.
So, taking it bit by bit:
when you say “I’m just helping you spot misinformation, that’s all”
They don’t say that. They say: here is a way to help people not be taken in by disinformation on one particular topic. (Their approach could surely be adapted to other particular topics. It could doubtless also be used to help people not be informed by accurate information on a particular topic, though to do that you’d need to lie.) They do not claim, nor has anyone here claimed so far as I know, that they are offering a general-purpose way of distinguishing misinformation from accurate information. That would be a neat thing, but a different and more difficult thing.
make sure that they believe Right thoughts
With one bit of spin removed, this becomes “make sure they are correct rather than incorrect”. With one bit of outright misrepresentation removed, it then becomes “make it more likely that they are correct rather than incorrect”. This seems to me a rather innocuous aim. If I discover that (say) many people think the sun and the moon are the same size, and I write a blog post or something explaining that they’re not even though they subtend about the same angle from earth, I am trying to “make sure that they believe Right thoughts”. But you wouldn’t dream of describing it that way. So what makes that an appropriate description in this case?
(Incidentally, it may be worth clarifying that the specific question about which the authors of the paper want people to “believe Right thoughts” is not global warming but whether there is a clear consensus on global warming among climate scientists.)
crimethink
I’m just going to revisit this because it really is obnoxious. The point of the term “crimethink” in 1984 is that certain kinds of thoughts there were illegal and people found thinking them were liable to be tortured into not thinking them any more. No one is suggesting that it should be illegal to disbelieve in global warming. No one is suggesting that people who disbelieve in global warming should be arrested, or tortured, or have their opinions forcibly changed in any other fashion. The analogy with “crimethink” just isn’t there. Unless you are comfortable saying that “X regards Y as crimethink” just means “X thinks Y is incorrect”, in which case I’d love to hear you justify the terminology.
No one is suggesting that it should be illegal to disbelieve in global warming.
This is factually incorrect (and that’s even without touching Twitter and such).
The analogy with “crimethink” just isn’t there.
Oh, all right. You don’t like the word. How did you describe their activity? “...not a way of teaching general-purpose reasoning skills that would enable people to identify misinformation of all kinds but a way of conveying factual information that makes people less likely to be deceived by particular instances of misinformation.”
Oh, one other thing. I’ve got no problems with the word. What I don’t like is its abuse to describe situations in which the totality of the resemblance to the fiction from which the term derives is this: Some people think a particular thing is true and well supported by evidence, and therefore think it would be better for others to believe it too.
If you think that is what makes the stuff about “crimethink” in 1984 bad, then maybe you need to read it again.
Sure. Just a quick example, because I have other things I need to be doing.
No one is suggesting that it should be illegal to disbelieve in global warming.
That is factually incorrect [with links to two news articles]
I take it that saying “That is factually incorrect” with those links amounts to a claim that the links show that the claim in question is factually incorrect. Neither of your links has anything to do with anyone saying it should be illegal to disbelieve in global warming.
(There were other untruths, half-truths, and other varieties of misdirection in what you said on this, but the above is I think the clearest example.)
[EDITED because I messed up the formatting of the quote blocks. Sorry.]
An unfortunate example because I believe I’m still right and you’re still wrong.
We’ve mentioned what, a California law proposal and a potential FBI investigation? Wait, there’s more! A letter from 20 scientists explicitly asks for a RICO (a US law aimed at criminal organizations such as drug cartels) investigation of deniers. A coalition of Attorneys General of several US states set up an effort to investigate and prosecute those who “mislead” the public about climate change.
“Was it appropriate to jail the guys from Enron?” Mr. Nye asked in a video interview with Climate Depot’s Marc Morano. “We’ll see what happens. Was it appropriate to jail people from the cigarette industry who insisted that this addictive product was not addictive, and so on?”
“In these cases, for me, as a taxpayer and voter, the introduction of this extreme doubt about climate change is affecting my quality of life as a public citizen,” Mr. Nye said. “So I can see where people are very concerned about this, and they’re pursuing criminal investigations as well as engaging in discussions like this.”
Of course there is James Hansen, e.g. this (note the title):
When you are in that kind of position, as the CEO of one of the primary players who have been putting out misinformation even via organisations that affect what gets into school textbooks, then I think that’s a crime.
“What I would challenge you to do is to put a lot of effort into trying to see whether there’s a legal way of throwing our so-called leaders into jail because what they’re doing is a criminal act,” said Dr. Suzuki, a former board member of the Canadian Civil Liberties Association.
“It’s an intergenerational crime in the face of all the knowledge and science from over 20 years.”
The statement elicited rounds of applause.
Here is Lawrence Torcello, Assistant Professor of Philosophy, no less:
What are we to make of those behind the well documented corporate funding of global warming denial? Those who purposefully strive to make sure “inexact, incomplete and contradictory information” is given to the public? I believe we understand them correctly when we know them to be not only corrupt and deceitful, but criminally negligent in their willful disregard for human life. It is time for modern societies to interpret and update their legal systems accordingly.
Nice Gish gallop, but not one of those links contradicts my statement that
No one is suggesting that it should be illegal to disbelieve in global warming.
which is what you called “factually incorrect”. Most of them (all but one, I think) are irrelevant for the exact same reason I already described: what they describe is people suggesting that some of the things the fossil fuel industry has done to promote doubt about global warming may be illegal under laws that already exist and have nothing to do with global warming, because those things amount to false advertising or fraud or whatever.
In fact, these prosecutions, should any occur, would I think have to be predicated on the key people involved not truly disbelieving in global warming. The analogy that usually gets drawn is with the tobacco industry’s campaign against the idea that smoking causes cancer; the executives knew pretty well that smoking probably did cause cancer, and part of the case against them was demonstrating that.
Are you able to see the difference between “it should be illegal to disbelieve in global warming” and “some of the people denying global warming are doing it dishonestly to benefit their business interests, in which case they should be subject to the same sanctions as people who lie about the fuel efficiency of the cars they make or the health effects of the cigarettes they make”?
I’m not sure that responding individually to the steps in a Gish gallop is a good idea, but I’ll do it anyway—but briefly. In each case I’ll quote from the relevant source to indicate how it’s proposing the second of those rather than the first. Italics are mine.
Letter from 20 scientists: “corporations and other organizations that have knowingly deceived the American people about the risks of climate change [...] The methods of these organizations are quite similar to those used earlier by the tobacco industry. A RICO investigation [...] played an important role in stopping the tobacco industry from continuing to deceive the American people about the dangers of smoking.”
Coalition of attorneys general: “investigations into whether fossil fuel companies have misled investors about how climate change impacts their investments and business decisions [...] making sure that companies are honest about what they know about climate change”. (But actually this one seems to be mostly about legislation on actual emissions, rather than about what companies say. Not at all, of course, about what individuals believe.)
Bill Nye (actually the story isn’t really about him; his own comment is super-vague): “did they mislead their investors and overvalue their companies by ignoring the financial costs of climate change and the potential of having to leave fossil fuel assets in the ground? [...] are they engaged in a conspiracy to mislead the public and affect public policy by knowingly manufacturing false doubt about the science of climate change?”
James Hansen: “he will accuse the chief executive officers [...] of being fully aware of the disinformation about climate change they are spreading”
David Suzuki: This is the one exception I mentioned above; Suzuki is (more precisely: was, 9 years ago) attacking politicians rather than fossil fuel companies. It seems to be rather unclear what he has in mind, at least from that report. He’s reported as talking about “what’s going on in Ottawa and Edmonton” and “what they’re doing”, but there are no specifics. What does seem clear is that (1) he’s talking specifically about politicians and (2) it’s “what they’re doing” rather than “what they believe” that he has a problem with. From the fact that he calls it “an intergenerational crime”, it seems like he must be talking about something with actual effects so I’m guessing it’s lax regulation or something he objects to.
Lawrence Torcello (incidentally, why “no less”? An assistant professor is a postdoc; it’s not exactly an exalted position): “corporate funding of global warming denial [...] purposefully strive to make sure “inexact, incomplete and contradictory information” is given to the public [...] not only corrupt and deceitful, but criminally negligent”.
“Deceitful Tongues” paper: “the perpetrators of this deception must have been aware that its foreseeable impacts could be devastating [...] As long as climate change deniers can be shown to have engaged in fraud, that is, knowing and wilful deception, the First Amendment afford them no protection.”
So, after nine attempts, you have given zero examples of anyone suggesting that it should be illegal to disbelieve in global warming. So, are you completely unable to read, or are you lying when you offer them as refutation of my statement that, and again I quote, “no one is suggesting that it should be illegal to disbelieve in global warming”?
(I should maybe repeat here a bit of hedging from elsewhere in the thread. It probably isn’t quite true that no one at all, anywhere in the world, has ever suggested that it should be illegal to disbelieve in global warming. Almost any idea, no matter how batshit crazy, has someone somewhere supporting it. So, just for the avoidance of doubt: what I meant is that “it should be illegal to disbelieve in global warming” is like “senior politicians across the world are really alien lizard people”: you can doubtless find people who endorse it, but they will be few in number and probably notably crazy in other ways, and they are in no way representative of believers in global warming or “progressives” or climatologists or any other group you might think it worth criticizing.)
Your first link is to proposed legislation in California. O NOES! Is California going to make it illegal to disbelieve in global warming? Er, no. The proposed law—you can go and read it; it isn’t very long; the actual legislative content is section 3, which is three short paragraphs—has the following effect: If a business engages in “unfair competition, as defined in Section 17200 of the Business and Professions Code” (it turns out this basically means false advertising), and except that the existing state of the law stops it being prosecuted because the offence was too long ago, then the Attorney General is allowed to prosecute it anyway.
I don’t know whether that’s a good idea, but it isn’t anywhere near making it illegal to disbelieve in global warming. It removes one kinda-arbitrary limitation on the circumstances under which businesses can be prosecuted if they lie about global warming for financial gain.
Your second link is similar, except that it doesn’t involve making anything illegal that wasn’t illegal before; the DoJ is considering bringing a civil action (under already-existing law, since the DoJ doesn’t get to make laws) against the fossil fuel industry for, once again, lying about global warming for financial gain.
Here: brainwashing. Do you like this word better?
“Brainwashing” is just as dishonestly bullshitty as “crimethink”, and again so far as I can tell if either term applies here it would apply to (e.g.) pretty much everything that happens in high school science lessons.
We’re not talking about making new laws. We’re talking about taking very wide and flexible existing laws and applying them to particular targets, ones to which they weren’t applied before. The goal, of course, is intimidation and lawfare since the chances of a successful prosecution are slim. The costs of defending, on the other hand, are large.
“Lying for financial gain” is a very imprecise accusation. Your corner chip shop might have a sign which says “Best chips in town!” which is lying for financial gain. Or take non-profits which tend to publish, let’s be polite and say “biased” reports which are, again, lying for financial gain.
Your point was that no one suggested going after denialists/sceptics with legal tools and weapons. This is not true.
Your point was that no one suggested going after denialists/sceptics with legal tools and weapons. This is not true.
It also is not my point. There are four major differences between what is suggested by your bloviation about “crimethink” and the reality:
“Crimethink” means you aren’t allowed to think certain things. At most, proposals like the ones you linked to dishonest descriptions of[1] are trying to say you’re not allowed to say certain things.
“Crimethink” is aimed at individuals. At most, proposals like the ones you linked to dishonest descriptions of[1] are trying to say that businesses are not allowed to say certain things.
“Crimethink” applies universally; a good citizen of Airstrip One was never supposed to contemplate the possibility that the Party might be wrong. Proposals like the ones you linked to dishonest descriptions of[1] are concerned only with what businesses are allowed to do in their advertising and similar activities.
“Crimethink” was dealt with by torture, electrical brain-zapping, and other such means of brute-force thought control. Proposals like the ones you linked to dishonest descriptions of[1] would lead at most to the same sort of sanction imposed in other cases of false advertising: businesses found guilty (let me remind you that neither proposal involves any sort of new offences) would get fined.
[1] Actually, the second one was OK. The first one, however, was total bullshit.
“Lying for financial gain” is a very imprecise accusation.
Sure. None the less, there is plenty that it unambiguously doesn’t cover. Including, for instance, “disbelieving in global warming”.
Please stay on topic. This subthread is about your claim that “No one is suggesting that it should be illegal”
A claim I made because you were talking about “crimethink”. And, btw, what was that you were saying elsewhere about other people wanting to set the rules of discourse? I’m sorry if you would prefer me to be forbidden to mention anything not explicit in the particular comment I’m replying to, but I don’t see any reason why I should be.
Are you implying that [...]
No. (Duh.) But I am saying that a law that forbids businesses to say things X for purposes Y in circumstances Z is not the same as a law that forbids individuals to think X.
I don’t think they are. Teaching people to reason is really hard. They describe what they’re trying to do as “inoculation”
Oh. Well, in that case, if they’re saying “teaching you to not think bad is too hard, we’ll just make sure you don’t believe the wrong things, as determined by us”, then I kinda thought Lumifer’s criticism would have been too obvious to bother asking about.
Suppose the following is the case (as in fact I think it is): There is lots of creationist misinformation around and it misleads lots of people; there is much less anti-creationist misinformation around and it misleads hardly anyone. In that case, it is perfectly reasonable for non-creationists to try to address the problem of creationist misinformation without also addressing the (non-)problem of anti-creationist misinformation.
Oh… yeah, that’s not true at all. If it were true, and 99% of the bullshit were generated by one side, then yes, it would make sense to spend 99% of one’s time addressing bullshit from that one side and it wouldn’t be evidence for pushing an agenda. There are still other reasons to have a more neutral balance of criticism even when there’s not a neutral balance of bullshit or evidence, but you’re right—if the bullshit is lopsided then the lopsided treatment wouldn’t be evidence of dishonest treatment.
It’s just that bullshit from one’s own side is a whole lot harder to spot because you immediately gloss over it thinking “yep, that’s true” and don’t stop to notice “wait! That’s not valid!”. In every debate I can think of, my own side (or “the correct side”, if that’s something we’re allowed to declare in the face of disagreement) is full of shit too, and I just didn’t notice it years ago.
I’m not. Really, truly, I’m not. [...]it seems like [...] the most likely reasons for someone to say that discussion of misinformation in this area should be balanced in the sense of trying to address both kinds are (1) that the person is a global-warming skeptic (in which case it is unsurprising that their view of the misinformation situation differs from mine) and (2) that the person is a global-warming believer who has been persuaded by the global-warming skeptics that the question is much more open than (I think) it actually is.
This reads to me as “I’m not. Really, truly, I’m not. I’m just [doing exactly what you said I was doing]”. This is a little hard to explain as there is some inferential distance here, but I’ll just say that what I mean by “have given no indication of what I believe” and the reason I think that is important is different from what it looks like to you.
Sure. (Though I’m not sure “looking down on” is quite the right phrase.) So far as I can tell, the authors of the paper we’re talking about don’t make any claim not to be “looking down on” global-warming skeptics. The complaints against them that I thought we were discussing here weren’t about them “looking down on” global-warming skeptics. Lumifer described them as trying to “prevent crimethink”, and that characterization of them as trying to practice Orwellian thought control is what I was arguing against.
Part of “preventing crimethink” is that the people trying to do it usually believe that they are justified in doing so (“above” the people they’re trying to persuade), and also that they are “simply educating the masses”, not “making sure they don’t believe things that we believe [but like, we really believe them and even assert that they are True!]”.
With one bit of spin removed, this becomes “make sure they are correct rather than incorrect”.
This is what it feels like from the inside when you try to enforce your beliefs on people. It feels like the beliefs you have are merely correct, not your own beliefs (that you have good reason to believe you’re right on, etc).
However, you don’t have some privileged access to truth. You have to reason and stuff. If your reasoning is good, you might come to right answers even. If the way that you are trying to make sure they are correct is by finding out what is true [according to your own beliefs, of course] and then nudging them towards believing the things that are true (which works out to “things that you believe”), then it is far more accurate to say “make sure they hold the same beliefs as me”, even if you hold the correct beliefs and even if it’s obviously correct and unreasonable to disagree.
And again, just to be clear, this applies to creationism too.
With one bit of outright misrepresentation removed, it then becomes “make it more likely that they are correct rather than incorrect”. This seems to me a rather innocuous aim. If I discover that (say) many people think the sun and the moon are the same size, and I write a blog post or something explaining that they’re not even though they subtend about the same angle from earth, I am trying to “make sure that they believe Right thoughts”. But you wouldn’t dream of describing it that way. So what makes that an appropriate description in this case?
If you simply said “many people think the sun and the moon are the same size, they aren’t and here’s proof”, I’d see you as offering a helpful reason to believe that the sun is bigger.
If it was titled “I’m gonna prevent you from being wrong about the moon/sun size!”, then I’d see your intent a little bit differently. Again, I’m talking about the general principles here and not making claims about what the paper itself actually does (I cannot criticise the paper itself as I have not read it), but it sounded to me like they weren’t just saying “hey guys, look, scientists do actually agree!” and were rather saying “how can we convince people that scientists agree” and taking that agreement as presupposed. “Inoculate against this idea” is talking about the idea and the intent to change their belief. If all you are trying to do is offer someone a new perspective, you can just do that—no reason to talk about how “effective” this might be.
Unless you are comfortable saying that “X regards Y as crimethink” just means “X thinks Y is incorrect”, in which case I’d love to hear you justify the terminology.
Yes, I thought it was obvious and common knowledge that Lumifer was speaking in hyperbole. No, they are not actually saying people should be arrested and tortured and I somehow doubt that is the claim Lumifer was trying to make here.
It’s not “thinks Y is incorrect”, it’s “socially punishes those who disagree”, even if it’s only mild punishment and even if you prefer not to see it that way. If, instead of arguing that they’re wrong you presuppose that they’re wrong and that the only thing up for discussion is how they could come to the wrong conclusion, they’re going to feel like they’re being treated like an idiot. If you frame those who disagree with you as idiots, then even if you have euphemisms for it and try to say “oh, well it’s not your fault that you’re wrong, and everyone is wrong sometimes”, then they are not going to want to interact with you.
Does this make sense?
If you frame them as an idiot, then in order to have a productive conversation with you that isn’t just “nuh uh!”/”yeah huh!”, they have to accept the frame that they’re an idiot, and no one wants to do that. They may be an idiot, and from your perspective it may not be a punishment at all—just that you’re helping them realize their place in society as someone who can’t form beliefs on their own and should just defer to the experts. And you might be right.
Still, by enforcing your frame on them, you are socially punishing them, from their perspective, and this puts pressure on them to “just believe the right things”. It’s not “believe 2+2=5 or the government will torture you”, it’s “believe that this climate change issue is a slam dunk or gjm will publicly imply that you are unreasonable and incapable of figuring out the obvious”, but that pressure is a step in the same direction—whether or not the climate change issue is a slam dunk and whether or not 2+2=5 does not change a thing. If I act to lower the status of people who believe the sky isn’t blue without even hearing out their reasons, then I am policing thoughts, and it becomes real hard to be in my social circle if you don’t share this communal (albeit true) belief. This has costs even when the communal beliefs are true. At the point where I start thinking less of people and imposing social costs on them for not sharing my beliefs (and not their inability to defend their own or update), I am disconnecting the truth finding mechanism and banking on my own beliefs being true enough on their own. This is far more costly than it seems like it should be for more than one reason—the obvious one being that people draw this line waaaaaaay too early, and very often are wrong about things where they stop tracking the distinction between “I believe X” and “X is true”.
And yes, there are alternative ways of going about it that don’t require you to pretend that “all opinions are equally valid”, or that you don’t think it would be better if more people agreed with you, or any of that nonsense.
Oh. Well, in that case, if they’re saying “teaching you to not think bad is too hard, we’ll just make sure you don’t believe the wrong things, as determined by us”, then I kinda thought Lumifer’s criticism would have been too obvious to bother asking about.
Those awful geography teachers, making sure their pupils don’t believe the wrong things (as determined by them) about what city is the capital of Australia! Those horrible people at snopes.com, making sure people don’t believe the wrong things (as determined by them) about whether Procter & Gamble is run by satanists!
What makes Lumifer’s criticism not “too obvious to bother about” is not doubt about whether the people he’s criticizing are aiming to influence other people’s opinions. It’s whether there’s something improper about that.
yeah, that’s not true at all.
In your opinion, is anti-creationist misinformation as serious a problem as creationist misinformation? (10% as serious?)
This is what it feels like from the inside when you try to enforce your beliefs on people.
Yes, it is. But it’s also what it feels like from the inside in plenty of other situations that don’t involve enforcing anything, and it’s also what it feels like from the inside when the beliefs in question are so firmly established that no reasonable person could object to calling them “facts” as well as “beliefs”. (That doesn’t stop them being beliefs, of course.)
(The argument “You are saying X. X is what you would say if you were doing Y. Therefore, you are doing Y.” is not a sound one.)
it is far more accurate to say “make sure they hold the same beliefs as me”
The trouble is that the argument you have offered for this is so general that it applies e.g. to teaching people about arithmetic. I don’t disagree that it’s possible, and not outright false, to portray what an elementary school teacher is doing as “make sure these five-year-olds hold the same beliefs about addition as me”; but I think it’s misleading for two reasons. Firstly, because it suggests that their goal is “have the children agree with me” rather than “have the children be correct”. (To distinguish, ask: Suppose it eventually turns out somehow that you’re wrong about this, but you never find that out. Would it be better if the children end up with right beliefs that differ from yours, or wrong ones that match yours? Of course they will say they prefer the former. So, I expect, will most people trying to propagate opinions that are purely political; I am not claiming that answering this way is evidence of any extraordinary virtue. But I think it makes it wrong to suggest that what they want is to be agreed with.) Secondly, because it suggests (on Gricean grounds) that there actually is, or is quite likely to be, a divergence between “the beliefs I hold” and “the truth” in their case. When it comes to arithmetic, that isn’t the case.
Now, the fact (if you agree with me that it’s a fact; maybe you don’t) that the argument leads to a bad place when applied to teaching arithmetic doesn’t guarantee that it does so when it comes to global warming. But if not, there must be a relevant difference between the two. In that case, what do you think the relevant differences are?
If it was titled “I’m gonna prevent you from being wrong about the moon/sun size!”, then I’d see your intent a little bit differently.
All the talk of “preventing” and other coercion is stuff that you and Lumifer have made up. It’s not real.
it sounded to me like they weren’t just saying “hey guys, look, scientists do actually agree!” and were rather saying “how can we convince people that scientists agree” and taking that agreement as presupposed.
You know, you could actually just read the paper. It’s publicly available and it isn’t very long. Anyway: there are two different audiences involved here, and it looks to me (not just from the fragment I just quoted, but from what you say later on) as if you are mixing them up a bit.
The paper is (implicitly) addressed to people who agree with its authors about global warming. It takes it as read that global warming is real, not as some sort of nasty coercive attempt to make its readers agree with that but because the particular sort of “inoculation” it’s about will mostly be of interest to people who take that position. (And perhaps also because intelligent readers who disagree will readily see how one might apply its principles to other issues, or other sides of the same issue if it happens that the authors are wrong about global warming.)
The paper describes various kinds of interaction between (by assumption, global-warming-believing) scientists and the public. So:
Those interactions are addressed to people who do not necessarily agree with the paper’s authors about global warming. In fact, the paper is mostly interested in people who have neither strong opinions nor expertise in the field. The paper doesn’t advocate treating those people coercively; it doesn’t advocate trying to make them feel shame if they are inclined to disagree with the authors; it doesn’t advocate trying to impose social costs for disagreeing; it doesn’t advocate saying or implying that anyone is an idiot.
So. Yes, the paper treats global warming as a settled issue. That would be kinda rude, and probably counterproductive, if it were addressed to an audience a nontrivial fraction of which disagrees; but it isn’t. It would be an intellectual mistake if in fact the evidence for global warming weren’t strong enough to make it a settled issue; but in fact it is. (In my opinion, which is what’s relevant for whether I am troubled by their writing what they do.)
Yes, I thought it was obvious and common knowledge that Lumifer was speaking in hyperbole.
I don’t (always) object to hyperbole. The trouble is that so far as I can tell, nothing that would make the associations of “crimethink” appropriate is true in this case. (By which, for the avoidance of doubt, I mean not only “they aren’t advocating torturing and brain-raping people to make them believe in global warming”, for instance, but “they aren’t advocating any sort of coercive behaviour at all”. And likewise for the other implications of “crimethink”.) The problem isn’t that it’s hyperbole, it’s that it’s not even an exaggeration of something real.
It’s not “thinks Y is incorrect”, it’s “socially punishes those who disagree”
Except that this “social punishment” is not something in any way proposed or endorsed by the paper Lumifer responded to by complaining about “crimethink”. He just made that up. (And you were apparently happy to go along with it despite having, by your own description, not actually read the paper.)
If, instead of arguing that they’re wrong you presuppose that they’re wrong and that the only thing up for discussion is how they could come to the wrong conclusion, they’re going to feel like they’re being treated like an idiot.
No doubt. But, once again, none of that is suggested or endorsed by the paper; neither does it make sense to complain that the paper is itself practising that behaviour, because it is not written for an audience of global-warming skeptics.
You might, of course, want to argue that I am doing that, right here in this thread. I don’t think that would be an accurate account of things, as it happens, but in any case I am not here concerned to defend myself. Lumifer complained that the paper was treating global warming skepticism as “crimethink”, and that’s the accusation I was addressing. If you want to drop that subject and discuss whether my approach in this thread is a good one, I can’t stop you, but it seems like a rather abrupt topic shift.
If I act to lower the status of people who believe the sky isn’t blue without even hearing out their reasons, then I am policing thoughts
OK, I guess, though “policing thoughts” seems to me excessively overheated language. But, again, this argument can be applied (as you yourself observe) to absolutely anything. In practice, we generally don’t feel the need to avoid saying straightforwardly that the sky is blue, or that 150 million years ago there were dinosaurs. That does impose some social cost on people who think the sky is red or that life on earth began 6000 years ago; but the reason for not hedging all the time with “as some of us believe”, etc., isn’t (usually) a deliberate attempt to impose social costs; it’s that it’s clearer and easier and (for the usual Gricean reasons: if you hedge, many in your audience will draw the conclusion that there must be serious doubt about the matter) less liable to mislead people about the actual state of expert knowledge if we just say “the sky is blue” or “such-and-such dinosaurs were around 150 million years ago”.
But, again, if we’re discussing—as I thought we were—the paper linked upthread, this is all irrelevant for the reasons given above. (If, on the other hand, we’ve dropped that subject and are now discussing whether gjm is a nasty rude evil thought-policer, then I will just remark that I do in fact generally go out of my way to acknowledge that some people do not believe in anthropogenic climate change; but sometimes, as e.g. when Lumifer starts dropping ridiculous accusations about “crimethink”, I am provoked into being a bit more outspoken than usual. And what I am imposing (infinitesimal) social costs for here is not “expressing skepticism about global warming”; it’s “being dickish about global warming” and, in fact, “attempting to impose social costs for unapologetically endorsing the consensus view on global warming”, the latter being what I think Lumifer has been trying to do in this thread.)
I’m not criticizing the article, nor am I criticizing you. I’m criticizing a certain way of approaching things like this. I purposely refrain from staking a claim on whether it applies to the article or to you because I’m not interested in convincing you that it does or even determining for sure whether it does. I get the impression that it does apply, but who knows—I haven’t read the article and I can’t read your mind. If it doesn’t, then congrats, my criticism doesn’t apply to you.
Your thinking is on a very similar track to mine when you suggest the test “assuming you’re wrong, do you want them to agree or be right?”. The difference is that I don’t think that people saying “be right, of course” is meaningful at all. I think you gotta look at what actually happens when they’re confronted with new evidence that they are in fact wrong. If, when you’re sufficiently confident, you drop the distinction between your map and the territory, not just in loose speech but in internal representation, then you lose the ability to actually notice when you’re wrong, and your actions will not match your words. This happens all the time.
I’ve never had a geography or arithmetic class suffer from that failure mode, and most of the times I disagreed with my teachers they responded in a way that actually helped us figure out which of us was right. However, in geometry, power electronics, and philosophy, I have run into this failure mode where, when I disagree, all they can think of is “how do I convince him he’s wrong” rather than “let me address his point and see where that leads”—but that’s because those particular teachers sucked and not a fault of teaching in general. With respect to that paper, the title does seem to imply that they’ve dropped that distinction. It is very common on that topic for people to drop the distinction and refuse to pick it up, so I’m guessing that’s what they’re doing there. Who knows though, maybe they’re saints. If so, good for them.
In practice, we generally don’t feel the need to avoid saying straightforwardly that the sky is blue, or that 150 million years ago there were dinosaurs. That does impose some social cost on people who think the sky is red or that life on earth began 6000 years ago;
Agreed.
I can straightforwardly say to you that there were dinosaurs millions of years ago because I expect that you’ll be with me on that and I don’t particularly care about alienating some observer who might disagree with us on that and is sensitive to that kind of thing. The important point is that the moment I find out that I’m actually interacting with someone who disagrees about what I presupposed, I stop presupposing that, apologize, and get curious—no matter how “wrong” they are, from my own viewpoint. It doesn’t matter if the topic is creationism or global warming or whether they should drive home blackout drunk because they’re upset.
A small minority of the times I wont, and instead I’ll inform them that I’m not interested in interacting with them because they’re an idiot. That’s a valid response too, in the right circumstance. This is imposing social costs for beliefs, and I’m actually totally fine with it. I just want to be really sure that I am aware of what I’m doing, why I’m doing it, and that I have a keen eye out for the signs that I was missing something.
What I don’t ever want to do is base my interactions with someone on the presupposition that they’re wrong and/or unreasonable. If I’m going to choose to interact with them, I’m going to try to meet them where they’re at. This is true even when I can put on a convincing face and hide how I really see them. This is true even when I’m talking to some third party about how I plan to interact with someone else. If I’m planning on interacting with someone, I’m not presupposing they’re wrong/unreasonable. Because doing that would make me less persuasive in the cases where I’m right and less likely to notice in the cases where I’m not. There’s literally no upside, and there’s downside whether or not I am, in fact, right.
In your opinion, is anti-creationist misinformation as serious a problem as creationist misinformation? (10% as serious?)
I wouldn’t ask this question in the first place.
Does this make sense?
Yes. It doesn’t surprise me that you believe that.
I’m not criticizing the article, nor am I criticizing you. I’m criticizing a certain way of approaching things like this.
That seems like the sort of thing that really needs stating up front. It’s that Gricean implicature thing again: If someone writes something about goldfish and you respond with “It’s really stupid to think that goldfish live in salt water”, it’s reasonable (unless there’s some other compelling explanation for why you bothered to say that) to infer that you think they think goldfish live in salt water.
(And this sort of assumption of relevance is a good thing. It makes discussions more concise.)
The difference is that I don’t think that people saying “be right, of course” is meaningful at all. I think you gotta look at what actually happens when they’re confronted with new evidence that they are in fact wrong.
For sure that’s far more informative. But, like it or not, that’s not information you usually have available.
If [...] you drop the distinction between your map and the territory [...]
Yup, it’s a thing that happens, and it’s a problem (how severe a problem depends on how well being “sufficiently confident” correlates, for the person in question, with actually being right).
With respect to that paper, the title does seem to imply that they’ve dropped that distinction.
As you say, there’s an important difference between dropping it externally and dropping it internally. I don’t know of any reliable way to tell when the former indicates the latter and when it doesn’t. Nor do I have a good way to tell whether the authors have strong enough evidence that dropping the distinction internally is “safe”, that they’re sufficiently unlikely to turn out to be wrong on the object level.
My own guess is that (1) it’s probably pretty safe to drop it when it comes to the high-level question “is climate change real?”, (2) the question w.r.t. which the authors actually show good evidence of having dropped the distinction is actually not that but “is there a strong expert consensus that climate change is real?”, and (3) it’s probably very safe to drop the distinction on that one; if climate change turns out not to be real then the failure mode is “all the experts got it wrong”, not “there was a fake expert consensus”. So I don’t know whether the authors are “saints” but I don’t see good reason to think they’re doing anything that’s likely to come back to bite them.
the moment I find out that I’m actually interacting with someone who disagrees about what I presupposed, I stop presupposing that [...]
I think this is usually the correct strategy, and it is generally mine too. Not 100% always, however. Example: Suppose that for some reason you are engaged in a public debate about the existence of God, and at some point the person you’re debating with supports one of his arguments with some remark to the effect that of course scientists mostly agree that so-called dinosaur fossils are really the bones of the biblical Leviathan, laid down on seabeds and on the land during Noah’s flood. The correct response to this is much more likely to be “No, sorry, that’s just flatly wrong” than “gosh, that’s interesting, do tell me more so I can correct my misconceptions”.
I wouldn’t ask this question in the first place.
That’s OK, you don’t need to; I already did. I was hoping you might answer it.
Yes. It doesn’t surprise me that you believe that.
So, given what you were saying earlier about “imposing social costs”, about not presupposing people are unreasonable, about interacting with people respectfully if at all … You do know how that “It doesn’t surprise me …” remark comes across, and intend it that way, right?
(In case the answer is no: It comes across as very, very patronizing; as suggesting that you have understood how I, poor fool that I am, have come to believe the stupid things I believe; but that they aren’t worth actually engaging with in any way. Also, it is very far from clear what “that” actually refers to.)
That seems like the sort of thing that really needs stating up front. It’s that Gricean implicature thing again: If someone writes something about goldfish and you respond with “It’s really stupid to think that goldfish live in salt water”, it’s reasonable (unless there’s some other compelling explanation for why you bothered to say that) to infer that you think they think goldfish live in salt water.
(And this sort of assumption of relevance is a good thing. It makes discussions more concise.)
If someone writes “it’s stupid to think that goldfish live in salt water”, there’s probably a reason they said it, and it’s generally not a bad guess that they think you think goldfish can live in salt water. However, it is still a guess, and to respond as if they are affirmatively claiming that you believe this is putting words in their mouth that they did not say, and can really mess with conversations, as it has here.
For sure that’s far more informative. But, like it or not, that’s not information you usually have available.
Agree to disagree.
My own guess is that (1) it’s probably pretty safe
A big part of my argument is that it doesn’t matter if Omega comes down and tells you that you’re right. It’s still a bad idea.
Another big part is that even when people guess that they’re probably pretty safe, they end up being wrong a really significant portion of the time, and that from the outside view it is a bad idea to drop the distinction simply because you feel it is “probably pretty safe”—especially when there is absolutely no reason to do it and still reason not to even if you’re correct on the matter. (Also, people are still often wrong even when they say “yeah, but that’s different. They’re overconfident; I’m actually safe”.)
I don’t see good reason to think they’re doing anything that’s likely to come back to bite them.
I note that you don’t. I do.
That’s OK, you don’t need to; I already did. I was hoping you might answer it.
The point is that I don’t see it as worth thinking about. I don’t know what I would do with the answer. It’s not like I have a genie that is offering me the chance to eliminate the problems caused by one side or the other, but that I have to pick.
There are a lot of nuances in things like this, and making people locally more correct is not even always a good thing. I haven’t seen any evidence that you appreciate this point, and until I do I can only assume that this is because you don’t. It doesn’t seem that we agree on what the answer to that question would mean, and until we’re on the same page there it doesn’t make any sense to try to answer it.
So, given what you were saying earlier about “imposing social costs”, about not presupposing people are unreasonable, about interacting with people respectfully if at all … You do know how that “It doesn’t surprise me …” remark comes across, and intend it that way, right?
(In case the answer is no: It comes across as very, very patronizing; as suggesting that you have understood how I, poor fool that I am, have come to believe the stupid things I believe; but that they aren’t worth actually engaging with in any way. Also, it is very far from clear what “that” actually refers to.)
I am very careful with what I presuppose, and what I said does not actually presuppose what you say it does. It’s not presupposing that you are wrong or not worth engaging with. It does imply that, as it looks to me (and I do keep this distinction in mind when saying this), it was not worth it for me to engage with you on that level at the time I said it. Notice that I am engaging with you and doing my best to get to the source of our actual disagreement—it’s just not on the level you were responding on. Before engaging with why you think my argument is wrong, I want some indication that you actually understand what my argument is, that’s all, and I haven’t seen it. This seems far less insulting to me than “poor fool who I am willing to presuppose believes stupid things and is not worth engaging with in any way”. Either way though, as a neutral matter of fact, I wasn’t surprised by anything you said, so take it how you would like.
I’m not presupposing that you’re not worth engaging with on that level, but I am refusing to accept your presupposition that you are worth engaging with on that level. That’s up for debate, as far as I’m concerned, and I’m open to you being right here. My stance is that it is never a good idea to presuppose things that you can predict your conversation partner will disagree with, unless you don’t mind them writing you off as an arrogant fool and disengaging, but that you never have to accept their presuppositions out of politeness. Do you see why this distinction is important to me?
I was aware that what I said was likely to provoke offense, and I would like to avoid that if possible. It’s just that, if you are going to read into what I say and treat it as if I am actively claiming things when you just have shaky reason to suspect that I privately believe them, then you’re making me choose between “doing a lot of legwork to prevent gjm from unfairly interpreting me” or “letting gjm unfairly interpret me and get offended by things I didn’t say”. I have tried to make it clear that I’m only saying what I’m saying, and that the typical inferences aren’t going to hold true, and at some point I gotta just let you interpret things how you will and then let you know that again, I didn’t claim anything other than what I claimed.
However, it is still a guess and to respond as if they are affirmatively claiming that you believe this is putting words in their mouth that they did not say and can really mess with conversations, as it has here.
In my experience, when it messes with conversations it is usually because one party is engaging in what I would characterize as bad-faith conversational manoeuvres.
I haven’t seen any evidence that you appreciate this point
I’m not sure there’s anything I could say or do that you would take as such evidence. (General remark: throughout this discussion you appear to have been assuming I fail to understand things that I do in fact understand. I do not expect you to believe me when I say that. More specific remark: I do in fact appreciate that point, but I don’t expect you to believe me about that either.)
I want to have some indication that you actually understand what my argument is, that’s all, and I haven’t seen it.
I am generally unenthusiastic about this sort of attempt to seize the intellectual high ground by fiat, not least because it is unanswerable if you choose to make it so; I remark that there are two ways for one person’s argument not to be well understood by another; and it seems to me that the underlying problem here is that from the outset you have proceeded on the assumption that I am beneath your intellectual level and need educating rather than engaging. However, I will on this occasion attempt to state your position and see whether you consider my attempt adequate. (If not, I suggest you write me off as too stupid to bother discussing with and we can stop.) I will be hampered here and there by the fact that in many places you have left important bits of your argument implicit, chosen not to oblige when I’ve asked you questions aimed at clarifying them, and objected when I have made guesses.
So. Suppose we have people A and B. A believes a proposition P (for application to the present discussion, take P to be something like “the earth’s climate has warmed dramatically over the last 50 years, largely because of human activity, and is likely to continue doing so unless we change what we’re doing”) and is very confident that P is correct. B, for all A knows, may be confident of not-P, or much less confident of P than A is, or not have any opinion on the topic just yet. The first question at issue is: How should A speak of P, in discussion with B (or with C, with B in the audience)? And, underlying it: How should A think of P, internally?
“Internally” A’s main options are (1) to treat P as something still potentially up for grabs or (2) to treat it as something so firmly established that A need no longer bother paying attention to how evidence and arguments for and against P stack up. With unlimited computational resources and perfect reasoning skills, #1 would be unambiguously better in all cases (with possible exceptions only for things, if any there be, so fundamental that A literally has no way of continuing to think if they turn out wrong); in practice, #2 is sometimes defensible for the sake of efficiency or (perhaps) if there’s a serious danger of being manipulated by a super-clever arguer who wants A to be wrong. The first of those reasons is far, far more common; I don’t know whether the second is ever really sufficient grounds for treating something as unquestionable. (But e.g. this sort of concern is one reason why some religious people take that attitude to the dogmas of their faith: they literally think there is a vastly superhuman being actively trying to get them to hold wrong beliefs.)
“Externally” A’s main options are (1) to talk of P as a disputable matter, to be careful to say things like “since I think P” rather than “since P”, etc., when talking to B; and (2) to talk as if A and B can both take it for granted that P is correct. There is some scope for intermediate behaviours, such as mostly talking as if P can be taken for granted but occasionally making remarks like “I do understand that P is disputed in some quarters” or “Of course I know you don’t agree about this, but it’s so much less cumbersome not to shoehorn qualifications into every single sentence”. There is also a “strong” form of #2 where A says or implies that no reasonable person would reject P, that P-rejecters are stupid or dishonest or crazy or whatever.
Your principal point is, in these terms, that “internally” #2 is very dangerous, even in cases where A is extremely confident that contrary evidence is not going to come along, and that “externally” #2 is something of a hostile act if in fact B doesn’t share A’s opinion because it means that B has to choose between acquiescing while A talks as if everyone knows that P, or else making a fuss and disagreeing and quite possibly being seen as rude. (And also because this sort of pressure may produce an actual inclination on B’s part to accept P, without any actual argument or evidence having been presented.) Introducing this sort of social pressure can make collective truthseeking less effective because it pushes B’s thinking around in (I think you might say, though for my part I would want to add some nuance) ways basically uncorrelated with truth. (There’s another opposite one, just as uncorrelated with truth or more so, which I don’t recall you mentioning: B may perceive A as hostile and therefore refuse to listen even if A has very strong evidence or arguments to offer.) And you make the secondary point that internal #2 and external #2 tend to spill over into one another, so that each also brings along the other’s dangers.
We are agreed that internal #2 is risky and external #2 is potentially (for want of a better term) rude, “strong” external #2 especially so. We may disagree on just how high the bar should be for choosing either internal or external #1, and we probably do disagree more specifically on how high it should be in the case where P is the proposition about global warming mentioned above.
(We may also disagree about whether it is likely that the authors of the paper we were discussing are guilty of choosing, or advocating that their readers choose, some variety of #2 when actually #1 would be better; about whether it is likely that I am; about whether it makes sense to apply terms like “crimethink” when someone adopts external #2; and/or about how good the evidence for that global-warming proposition actually is. But I take it none of that is what you wish to be considered “your argument” in the present context.)
In support of the claim that internal #2 is dangerous far more often than A might suppose, you observe (in addition to what I’ve already said above) that people are very frequently very overconfident about their beliefs; that viewed externally, A’s supreme confidence in P doesn’t actually make it terribly unlikely that A is wrong about P. Accordingly, you suggest, A is making a mistake in adopting internal #2 even if it seems to A that the evidence and arguments for P are so overwhelming that no one sane could really disagree—especially if there are in fact lots of people, in all other respects apparently sane, who do disagree. I am not sure whether you hold that internal #2 is always an error; I think everything you’ve said is compatible with that position but you haven’t explicitly claimed it and I can think of good reasons for not holding it.
In support of the claim that external #2 is worse than A might suppose, you observe that (as mentioned above) doing it imposes social costs on dissenters, thereby making it harder for them to think independently and also making it more likely that they will just go away and deprive A’s community of whatever insight they might offer. And (if I am interpreting correctly one not-perfectly-clear thing you said) that doing this amounts to deciding not to care about contrary evidence and arguments, in other words to implicitly adopting internal #2 with all its dangers. You’ve made it explicit that you’re not claiming that external #2 is always a bad idea; on the face of it you’ve suggested that external #2 is fine provided A clearly understands that it involves (so to speak) throwing B to the wolves; my guess is that in fact you consider it usually not fine to do that; but you haven’t made it clear (at least to me) what you consider a good way to decide whether it is. It is, of course, clear that you don’t consider that great confidence about P on A’s part is in itself sufficient justification. (For the avoidance of doubt, nor do I; but I think I am willing to give it more weight than you are.)
That’ll do for now. I have not attempted to summarize everything you’ve said, and perhaps I haven’t correctly identified what subset you consider “your argument” for present purposes. (In particular, I have ignored everything that appears to me to be directed at specific (known or conjectured) intellectual or moral failings of the authors of the paper, or of me, and attended to the more general point.)
something like “the earth’s climate has warmed dramatically over the last 50 years, largely because of human activity, and is likely to continue doing so unless we change what we’re doing”
Without restarting the discussion, let me point out what I see to be the source of many difficulties. You proposed a single statement to which you, presumably, want to attach some single truth value. However your statement consists of multiple claims from radically different categories.
“the earth’s climate has warmed dramatically over the last 50 years” is a claim of an empirical fact. It’s relatively easy to discuss it and figure out whether it’s true.
“largely because of human activity” is a causal theory claim. This is much MUCH more complex than the preceding claim, especially given the understanding (existing on LW) that conclusions about causation do not necessarily fall out of descriptive models.
“and is likely to continue doing so” is a forecast. Forecasts, of course, cannot be proved or disproved in the present. We can talk about our confidence in a particular forecast which is also not exactly a trivial topic.
Jamming three very different claims together and treating them as a single statement doesn’t look helpful to me.
a single statement to which you, presumably, want to attach some single truth value
It would be a probability, actually, and it would need a lot of tightening up before it would make any sense even to try to attach any definite probability to it. (Though I might be happy to say things like “any reasonable tightening-up will yield a statement to which I assign p>=0.9 or so”.)
your statement consists of multiple claims from radically different categories
Yes, it does.
For the avoidance of doubt, in writing down a conjunction of three simpler propositions I was not making any sort of claim that they are of the same sort, or that they are equally probable, or that they are equivalent to one another, or that it would not often be best to treat individual ones (or indeed further-broken-down ones) separately.
Jamming three very different claims together and treating them as a single statement doesn’t look helpful to me.
It seems perfectly reasonable to me. It would be unhelpful to insist that the subsidiary claims can’t be considered separately (though each of them is somewhat dependent on its predecessors; it doesn’t make sense to ask why the climate has been warming if in fact it hasn’t, and it’s risky at best to forecast something whose causes and mechanisms are a mystery to you) but, I repeat, I am not in any way doing that. It would be unhelpful to conflate the evidence for one sub-claim with that for another; that’s another thing I am not (so far as I know) doing. But … unhelpful simply to write down a conjunction of three closely related claims? Really?
In what sense (other than writing it down, and suggesting that it summarizes what is generally meant by “global warming” when people say they do or don’t believe it) am I treating it as a single unit?
“the earth’s climate has warmed dramatically over the last 50 years” is a claim of an empirical fact.
“The earth’s climate has warmed by about x °C over the last 50 years” is a claim of an empirical fact. “It is dramatic for a planet to warm by about x °C in 50 years” is an expression of the speaker’s sense of drama.
I’m not sure there’s anything I could say or do that you would take as such evidence.
What you say below (“I do in fact appreciate that point”) is all it takes for this.
(General remark: throughout this discussion you appear to have been assuming I fail to understand things that I do in fact understand. I do not expect you to believe me when I say that.)
For what it’s worth, I feel the same way about this. From my perspective, it looks like you are assuming that I don’t get things that I do get, assuming I’m assuming things I am not assuming, saying I’m saying things I’m not saying, not addressing my important points, being patronizing yourself, “gish galloping”, and generally arguing in bad faith. I just had not made a big stink about it because I didn’t anticipate that you wanted my perspective on this or that it would cause you to rethink anything.
Being wrong about what one understands is common too (illusion of transparency, and all that), but I absolutely do take this as very significant evidence as it does differentiate you from a hypothetical person who is so wrapped up in ego defense that they don’t want to address this question.
I am generally unenthusiastic about this sort of attempt to seize the intellectual high ground by fiat, not least because it is unanswerable if you choose to make it so;
Can you explain what you mean by “attempt to seize the intellectual high ground” and “it is unanswerable”, as it applies here? I don’t think I follow. I don’t think I’m “attempting to seize” anything, and have no idea what the question that “unanswerable” applies to is.
it seems to me that the underlying problem here is that from the outset you have proceeded on the assumption that I am beneath your intellectual level and need educating rather than engaging.
Is this “educating” as in scare quotes “educating”/“correcting your foolish wrong thoughts”, or as in the “I thought you might be interested in hearing what I have to say about the topic, so I shared” kind of educating? I’ll agree that it’s the latter, but I wouldn’t put “beneath [my] intellectual level” on it. You asked a question, I had an answer, I thought you wanted it. Asking questions does not make people inferior or “beneath” anyone else, in my opinion.
However, if you mean “you don’t seem interested in my rebuttal”, then you’re right, I was not. I have put a ton of thought into the ethics of persuasion over the last several years, and there aren’t really any questions here that I don’t feel like I have a pretty darn solid answer to. Additionally, if you don’t already think about these problems the way that I do, it’s actually really difficult to convey my perspective, even if communication is flowing smoothly. And it often doesn’t, because it’s also really really easy to think I’m talking about something else, leading to the illusion that my point has been understood. This combination makes run-of-the-mill disagreement quite uninteresting, and I only engaged because I mistook your original question for “I would like to learn how to differentiate between teaching and thought-policing”, not “I would like to argue that they aren’t thought policing and that you’re wrong to think they are”.
And again, I do not think it warrants accusations of “patronizing you poor, poor fool” for privately holding the current best guess that this disagreement is more likely to be about you misunderstanding my point than about me hallucinating something in their title. Am I allowed to believe I’m probably right, or do I have to believe that you’re probably right and that I’m probably wrong? Are you allowed to believe that you’re probably right?
However, I will on this occasion attempt to state your position and see whether you consider my attempt adequate.
It is far enough off that I can’t endorse it as “getting” where I’m coming from. For example, “being seen as rude”, itself, is so not what it’s about. There are often two very different ways of looking at things that can produce superficially similar prescriptions for fundamentally different reasons. It looks like you understand the one I do not hold, but do not realize that there is another, completely different, reason to not want to do #2 externally.
However, I do appreciate it as an intellectually honest attempt to check your understanding of my views and it does capture the weight of the main points themselves well enough that I’m curious to hear where you disagree (or if you don’t disagree with it as stated).
Somewhat relatedly but somewhat separately, I’m interested to hear how you think it applies to how you’ve approached things here. From my perspective, you’re doing a whole lot of the external #2 at me. Do you agree and think it’s justified? If so, how? Do you not see yourself as doing external #2 here? If so, do you understand how it looks that way to me?
Given this summary of my view, I do think I see why you don’t see it as suggesting that the researchers were making any mistake. The reason I do think they’re making a mistake is not present in your description of my views.
I will be hampered here and there by the fact that in many places you have [...] chosen not to oblige when I’ve asked you questions aimed at clarifying them, and objected when I have made guesses.
Hold on.
I gotta stop you there because that’s extremely unfair. I haven’t answered every question you’ve asked, but I have addressed most, if not all of them (and if there’s one I missed that you would like me to address, ask and I will). I also specifically addressed the fact that I don’t have a problem with you making guesses but that I don’t see it as very charitable or intellectually honest when you go above and beyond and respond as if I had actively claimed those things.
You’ve made it explicit that you’re not claiming that external #2 is always a bad idea; on the face of it you’ve suggested that external #2 is fine provided
This is a very understandable reading of what I said, but no. I do not agree that what you call “external #2” is ever a good thing to do either. I also would not frame it that way in the first place.
I did not accuse you of that. I don’t think you’ve done that. I said that Lumifer did it because, well, he did: I said “no one is proposing X”, he said “what about A and B”, I pointed out that A and B were not in fact proposing X, and he posted another seven instances of … people not proposing X. A long sequence of bad arguments, made quickly but slower to answer: that is exactly what a Gish gallop is. I don’t think you’ve been doing that, I don’t think Lumifer usually does it, but on this occasion he did.
I am generally unenthusiastic about this sort of attempt to seize the intellectual high ground by fiat, not least because it is unanswerable if you choose to make it so;
Can you explain what you mean by “attempt to seize the intellectual high ground” and “it is unanswerable”, as it applies here?
“Attempting to seize the intellectual high ground” = “attempting to frame the situation as one in which you are saying clever sensible things that the other guy is too stupid or blinkered or whatever to understand”. “Unanswerable if you choose to make it so” because when you say “I don’t think you have grasped my argument”, any response I make can be answered with “No, sorry, I was right: you didn’t understand my argument”—regardless of what I actually have understood or not understood. (I suppose one indication of good or bad faith on your part, in that case, would be whether you then explain what it is that I allegedly didn’t understand.)
Am I allowed to believe that I’m probably right [...]?
I am greatly saddened, and somewhat puzzled, that you apparently think I might think the answer is no. (Actually, I don’t think you think I might think the answer is no; I think you are grandstanding.) Anyway, for the avoidance of doubt, I have not the slightest interest in telling anyone else what they are allowed to believe, and if (e.g.) what I have said upthread about that paper about global warming has led you to think otherwise then either I have written unclearly or you have read uncharitably or both.
For example, “being seen as rude”, itself, is so not what it’s about.
The problem here is unclarity on my part or obtuseness on yours, rather than obtuseness on my part or unclarity on yours :-). The bit about “being seen as rude” was not intended as a statement of your views or of your argument; it was part of my initial sketch of the class of situations to which those views and that argument apply. The point at which I start sketching what I think you were saying is where I say “Your principal point is, in these terms, …”.
The reason I do think they’re making a mistake is not present in your description of my views.
Well, I was (deliberately) attempting to describe what I took to be your position on the general issue, rather than on what the authors of the article might or might not have done. (I am not all that interested in what you think they have done, since you’ve said you haven’t actually looked at the article.) But it’s entirely possible that I’ve failed to notice some key part of your argument, or forgotten to mention it even though if I’d been cleverer I would have. I don’t suppose you’d like to explain what it is that I’ve missed?
This is a very understandable reading of what I said, but no. I do not agree that what you call “external #2” is ever a good thing to do either.
Just in case anyone other than us is reading this, I would like to suggest that those hypothetical readers might like to look back at what I actually wrote and how you quoted it, and notice in particular that I explicitly said that I think your position probably isn’t the one that “on the face of it you’ve suggested”. (Though it was not previously clear to me that you think “external #2” is literally never a good idea. One reason is that it looks to me—and still does after going back and rereading—as if you explicitly said that you sometimes do it and consider it reasonable. See here and search for “A small minority”.)
As to the other things you’ve said (e.g., asking whether and where and why I disagree with your position), I would prefer to let that wait until you have helped me fix whatever errors you have discerned in my understanding of your position and your argument. Having gone to the trouble of laying it out, it seems like it would be a waste not to do that, don’t you think?
You’ve made specific mention of two errors. One (see above) wasn’t ever meant to be describing your position, so that’s OK. The other is that my description doesn’t mention “the reason I do think they’re making a mistake” (they = authors of that article whose title you’ve read); I don’t know whether that’s an error on my part, or merely something I didn’t think warranted mentioning, but the easiest way to find out would be for you to say what that reason is.
Your other comments give the impression that there are other deficiencies (e.g., “It is far enough off that I can’t endorse it as “getting” where I’m coming from.” and “It looks like you understand the one I do not hold, but do not realize that there is another, completely different, reason to not want to do #2 externally.”) and I don’t think it makes any sense to proceed without fixing this. (Where “this” is probably a lack of understanding on my part, but might also turn out to be that for one reason or another I didn’t mention it, or that I wasn’t clear enough in my description of what I took to be your position.) If we can’t get to a point where we are both satisfied that I understand you adequately, we should give up.
Just in case anyone other than us is reading this,
For whatever little it’s worth, I read the first few plies of these subthreads, and skimmed the last few.
From my partial reading, it’s unclear to me that Lumifer is/was actually lying (being deliberately deceptive). More likely, in my view, is/was that Lumifer sincerely thinks your distinction between (1) criminalizing disbelief in global warming, and (2) criminalizing the promulgation of assertions that global warming isn’t real in order to gain an unfair competitive advantage in a marketplace, is spurious. I think Lumifer is being wrong & silly about that, but sincerely wrong & silly. On the “crimethink” accusation as applied to the paper specifically, Lumifer plainly made a cheap shot, and you were right to question it.
As for your disagreement with jimmy, I’m inclined to say you have the better of the argument, but I might be being overly influenced by (1) my dim view of jimmy’s philosophy/sociology of argument, at least as laid out above, (2) my incomplete reading of the discussion, and (3) my knowledge of your track record as someone who is relatively often correct, and open to dissecting disagreement with others, often to a painstaking extent.
I would like to quibble here that I’m not trying to argue anything, and that if gjm had said “I don’t think the authors are doing anything nearly equivalent to crimethink and would like to see you argue that they are”, I wouldn’t have engaged because I’m not interested in asserting that they are.
I’d call it more “[...] of deliberately avoiding argument in favor of “sharing honestly held beliefs for what they’re taken to be worth”, to those that are interested”. If they’re taken (by you, gjm, whoever) to be worth zero and there’s no interest in hearing them and updating on them, that’s totally cool by me.
I am greatly saddened, and somewhat puzzled, that you apparently think I might think the answer is no. (Actually, I don’t think you think I might think the answer is no; I think you are grandstanding.)
It’s neither. I have a hard time imagining that you could say no. I was just making sure to cover all the bases because I also have a hard time imagining that you could still say that I’m actively trying to claim anything after I’ve addressed that a couple times.
I bring it up because at this point, I’m not sure how you can simultaneously hold the views “he can believe whatever he wants” and “he hasn’t done anything in addition that suggests judgement” (which I get that you haven’t yet agreed to, though you also haven’t addressed my arguments that I haven’t), and then accuse me of trying to claim the intellectual high ground, without cognitive dissonance. I’m giving you a chance to either teach me something new (i.e. “how gjm can simultaneously hold these views congruently”), or, in the case that you can’t, the chance for you to realize it.
The bit about “being seen as rude” was not intended as a statement of your views or of your argument; it was part of my initial sketch of the class of situations to which those views and that argument apply. The point at which I start sketching what I think you were saying is where I say “Your principal point is, in these terms, …”.
Quoting you, “Your principal point is, in these terms, that [...] and that “externally” #2 is something of a hostile act if in fact B doesn’t share A’s opinion because it means that B has to choose between acquiescing while A talks as if everyone knows that P, or else making a fuss and disagreeing and quite possibly being seen as rude.” (emphasis mine)
That looks like it’s intended to be a description of my views to me, given that it directly follows the point where you start sketching out what my views are, following a “because”, and before the first period.
Even if it’s not, though, and you’re saying it as part of a sketch of the situation, it’s a sketch in which anyone who sees things the way I do can tell that I won’t find that a relevant part of the situation. The fact that you mention it anyway—even if it were just part of that sketch—indicates either that you’re missing this, or that you see that you’re giving a sketch I don’t agree with and are treating my disagreement as irrelevant.
Well, I was (deliberately) attempting to describe what I took to be your position on the general issue, rather than on what the authors of the article might or might not have done.
Right. I think it is the correct approach to describe my position in general. However, the piece of my general position that would come into play in this specific instance was not present so if you apply those views as stated, of course you wouldn’t have a problem with what the authors have done in this specific instance.
(I am not all that interested in what you think they have done, since you’ve said you haven’t actually looked at the article.)
I am also not interested in what (I think) they have done in the article. I have said this already, but I’ll agree again if you’d like. You’re right to not be interested in this.
I don’t suppose you’d like to explain what it is that I’ve missed?
Honestly, I would love to. I don’t think I’m capable of explaining it to you as of where we stand right now. Other people, yes. Once we get to the bottom of our disagreement, yes. Not until then though.
This conversation has been fascinating to me, but it has also been a bit fatiguing to make the same points and not see them addressed. I’m not sure we’ll make it that far, but it’d be awesome if we do.
notice in particular that I explicitly said that I think your position probably isn’t the one that “on the face of it you’ve suggested”.
Yes, I noticed that qualification and agree. On the face of it, it certainly does look that way. That’s what I meant by “a very understandable reading”.
However, the preceding line is “You’ve made it explicit that you’re not claiming that external #2 is always a bad idea”, and that is not true. I said “A small minority of the times I won’t [...]”, and what follows is not explicitly “external #2”. I can see how you would group what follows with “external #2”, but I do not. This is what I mean when I say that I predict you will assume that you’re understanding what I’m saying when you do not.
As to the other things you’ve said (e.g., asking whether and where and why I disagree with your position), I would prefer to let that wait until you have helped me fix whatever errors you have discerned in my understanding of your position and your argument.
This seems backwards to me. Again, with the double cruxing, you have to agree on F before you can agree on E before you can agree on D before you can even think about agreeing on the original topic. This reads to me like you saying you want me to explain why we disagree on B before you address C.
Having gone to the trouble of laying it out, it seems like it would be a waste not to do that, don’t you think?
Not necessarily. I think it’s perfectly fine to be uninterested in helping you fix the errors I discern in the understanding of my argument, unless I had already gone out of my way to give you reason to believe I would if you laid out your understanding for me. Especially if I don’t think you’ll be completely charitable.
I haven’t gone out of my way to give you reason to believe I would, since I wasn’t sure at the time, but I’ll state my stance explicitly now. This conversation has been fascinating to me. It has also been a bit fatiguing, and I’m unsure of how long I want to continue this. To the extent that it actually seems we can come to the bottom of our disagreement, I am interested in continuing. If we get to the point where you’re interested in hearing it and I think it will be fruitful, I will try to explain the difference between my view and your attempt to describe them.
As I see it now, we can’t get there until I understand why you treat what I see as “privately holding my beliefs, and not working to hide them from (possibly fallacious) inference” as if it is “actively presupposing that my beliefs are correct, and judging anyone who disagrees as ‘below me’”. I also don’t think we can get there until we can agree on a few other things that I’ve brought up and haven’t seen addressed.
Either way, thanks for the in depth engagement. I do appreciate it.
On “being seen as rude”: I beg your pardon, I was misremembering exactly what I had written at each point. However, I still can’t escape the feeling that you are either misunderstanding or (less likely) being deliberately obscure, because what you actually say about this seems to me to assume that I was presenting “being seen as rude” as a drawback of doing what I called “external #2”, whereas what I was actually saying is that one problem with “external #2” is that it forces someone who disagrees to do something that could be seen as rude; that’s one mechanism by which the social pressure you mentioned earlier is applied.
To the extent that it actually seems we can come to the bottom of our disagreement, I am interested in continuing.
Except that what you are actually doing is repeatedly telling me that I have not understood you correctly, and not lifting a finger to indicate what a correct understanding might be and how it might differ from mine. You keep talking about inferential distances that might prevent me understanding you, but seem to make no effort even to begin closing the alleged gap.
In support of this, in the other half of your reply you say I “seem to be acting as if it’s impossible to be on step two honestly and that I must be trying to hide from engagement if I am not yet ready to move on to step three”; well, if you say that’s how it seems to you then I dare say it’s true, but I am pretty sure I haven’t said it’s “impossible to be on step two honestly” because I don’t believe that, and I’m pretty sure I haven’t said that you “must be trying to hide from engagement” because my actual position is that you seem to be behaving in a way consistent with that but of course there are other possibilities. And you say that I “should probably make room for both possibilities” (i.e., that you do, or that you don’t, see things I don’t); which is odd because I do in fact agree that both are possibilities.
So. Are you interested in actually making progress on any of this stuff, or not?
In support of this, in the other half of your reply you say I “seem to be acting as if it’s impossible to be on step two honestly and that I must be trying to hide from engagement if I am not yet ready to move on to step three”; well, if you say that’s how it seems to you then I dare say it’s true, but I am pretty sure I haven’t said it’s “impossible to be on step two honestly” because I don’t believe that, and I’m pretty sure I haven’t said that you “must be trying to hide from engagement” because my actual position is that you seem to be behaving in a way consistent with that but of course there are other possibilities. And you say that I “should probably make room for both possibilities” (i.e., that you do, or that you don’t, see things I don’t); which is odd because I do in fact agree that both are possibilities.
Right. I’m not accusing you of doing it. You didn’t say it outright, I don’t expect you to endorse that description, and I don’t see any reason even to start to form an opinion on whether it accurately describes your behavior or not. I was saying it as more of a “hey, here’s what you look like to me. I know (suspect?) this isn’t what you look like to you, so how do you see it and how do I square this with that?”. I just honestly don’t know how to square these things.
If, hypothetically, I’m on step two because I honestly believe that if I tried to explain my views you would likely prematurely assume that you get it and that it makes more sense to address this meta level first, and if, hypothetically, I’m even right and have good reasons to believe I’m right… what’s your prescription? What should I do, if that were the case? What could I do to make it clear that I am arguing in good faith, if that were the case?
So. Are you interested in actually making progress on any of this stuff, or not?
If you can tell me where to start that doesn’t presuppose that my beliefs are wrong or that I’ve been arguing in bad faith, I would love to. Where would you have me start?
I just honestly don’t know how to square these things.
Whereas I honestly don’t know how to help you square them, because I don’t see anything in what I wrote that seems like it would make a reasonable person conclude that I think it’s impossible to be on your “step 2” honestly, or that I think you “must be trying to hide from engagement” (as opposed to might be, which I do think).
If [...] I honestly believe that [...] you would likely prematurely assume that you get it [...] what’s your prescription? [...] What could I do to make clear that I am arguing in good faith [...]?
My general prescription for this sort of situation (and I remark that not only do I hope I would apply it with roles reversed, but that’s pretty much what I am doing in this discussion) is: proceed on the working assumption that the other guy isn’t too stupid/blinkered/crazy/whatever to appreciate your points, and get on with it; or, if you can’t honestly give that assumption high enough probability to make it worth trying, drop the discussion altogether.
(This is also, I think, the best thing you could do to make it clear, or at any rate markedly more probable to doubtful onlookers, that you’re arguing in good faith.)
If you can tell me where to start that doesn’t presuppose that my beliefs are wrong or that I’ve been arguing in bad faith, I would love to. Where would you have me start?
The same place as I’ve been asking you to start for a while: you say I haven’t understood some important parts of your position, so clarify those parts of your position for me. Adopt the working assumption that I’m not crazy, evil or stupid but that I’ve missed or misunderstood something, and explain. Sure, it might not work: I might just be too obtuse to get it; in that case that fact will become apparent (at least to you) and you can stop wasting your time. Or it might turn out—as, outside view, it very frequently does when someone smart has partially understood something and you explain to them the things you think they’ve missed—that I will understand; or—as, outside view, is also not so rare—that actually I understood OK already and there was some sort of miscommunication. In either of those cases we can get on with addressing whatever actual substantive disagreements we turn out to have, and maybe at least one of us will learn something.
(In addition to the pessimistic option of just giving up, and the intermediate option of making the working assumption that I’ve not understood your position perfectly but am correctible, there is also the optimistic option of making the working assumption that actually I’ve understood it better than you think, and proceeding accordingly. I wouldn’t recommend that option given my impression of your impression of my epistemic state, but there are broadly-similar situations in which I would so I thought I should mention it.)
My general prescription for this sort of situation [...] is: proceed on the working assumption that the other guy isn’t too stupid/blinkered/crazy/whatever to appreciate your points, and get on with it; or, if you can’t honestly give that assumption high enough probability to make it worth trying, drop the discussion altogether.
All of the options you explicitly list imply disrespect. If I saw all other options as implying disrespect as well, I would agree that “if you can’t honestly give that assumption high enough probability to make it worth trying, [it’s best to] drop the discussion altogether”.
However, I see it as possible to have both mutual respect and predictably counterproductive object-level discussion. Because of this, I see potential for fruitful avenues other than “plow on at the object level and hope it works out, or bail”. I have had many conversations with people whom I respect (and who by all appearances seem to feel respected by me) where we have done this with good results—and I’ve been on the other side too, again, without feeling like I was being disrespected.
Your responses have all been consistent with acting like I must be framing you as stupid/blinkered/crazy/otherwise-unworthy-of-respect if I don’t think object level discussion is the best next step. Is there a reason you haven’t addressed the possibility that I’m being sincere and that my disinterest in “just explaining my view” at this point isn’t predicated on me concluding that you’re stupid/blinkered/crazy/otherwise-unworthy-of-respect? Even to say that you hear me but conclude that I must be lying/crazy since that’s obviously too unlikely to be worth considering?
The same place as I’ve been asking you to start for a while: [...] clarify those parts of your position for me. Adopt the working assumption that I’m not crazy, evil or stupid but that I’ve missed or misunderstood something, and explain. Sure, it might not work: I might just be too obtuse to get it; in that case that fact will become apparent (at least to you) and you can stop wasting your time.
The thing is, that does presuppose that my belief that “in this case, as with many others with large inferential distance, trying to simply clarify my position will result in more misunderstanding than understanding, on expectation, and therefore is not a good idea—even if the other person isn’t stupid/blinkered/crazy/otherwise-undeserving-of-respect” is wrong. Er.. unless you’re saying “sure, you might be right, and maybe it could work your way and couldn’t work my way, but I’m still unwilling to take that seriously enough to even consider doing things your way. My way or it ain’t happenin’.”
If it’s the latter case, and if, as you seem to imply, this is a general rule you live by, I’m not sure what your plan is for dealing with the possibility of object level blind spots—but I guess I don’t have to. Either way, it’s a fair response here, if that’s the decision you want to make—we can agree to disagree here too.
Anyway, if you’re writing all these words because you actually want to know how the heck I see it, then I’ll see what I can do. It might take a while because I expect it to take a decent amount of work and probably end up long, but I promise I will work at it. If, on the other hand, you’re just trying to do an extremely thorough job at making it clear that you’re not closed to my arguments, then I’d be happy to leave it as “you’re unwilling to consider doing things my way”+”I’m unwilling to do things your way until we can agree that your way is the better choice”, if that is indeed a fair description of your stance.
(Sorta separately, I’m sure I’d have a bunch of questions on how you see things, if you’d have any interest in explaining your perspective)
All the options you explicitly list imply disrespect
Well, the one I’m actually proposing doesn’t, but I guess you mean the others do. I’m not sure they exactly do, though I certainly didn’t make any effort to frame them in tactfully respect-maximizing terms; in any case, it’s certainly not far off to say they all imply disrespect. I agree that there are situations in which something can’t be explained without preparation and no disrespect to the other guy is implied; but I don’t think this is one of them, because what happened was
jimmy says some things
gjm responds
jimmy starts saying things like “Before engaging with why you think my argument is wrong, I want to have some indication that you actually understand what my argument is, that’s all, and I haven’t seen it.”
rather than, say,
jimmy says “so I have a rather complicated and subtle argument to make, so I’m going to have to begin with some preliminaries”.
When what happens is that you begin by making your argument and then start saying: nope, you didn’t understand it—and when your reaction to a good-faith attempt at dealing with the alleged misunderstanding is anything other than “oh, OK, let me try to explain more clearly”—I think it does imply something like disrespect; at least, as much like disrespect as those options I listed above. Because what you’re saying is: you had something to say that you thought was appropriate for your audience, and not the sort of thing that needed advance warning that it was extra-subtle; but now you’ve found that I don’t understand it and (you at least suspect) I’m not likely to understand it even if you explain it.
That is, it means that something about me renders me unlikely—even when this is locally the sole goal of the discussion, and I have made it clear that I am prepared to go to substantial lengths to seek mutual understanding—to be able to understand this thing that you want to say, and that you earlier thought was a reasonable thing to say without laying a load of preparatory groundwork.
Is there a reason you haven’t addressed the possibility that [...] my disinterest [...] isn’t predicated on me concluding that you’re stupid/blinkered/crazy/otherwise-unworthy-of-respect?
See above for why I haven’t considered it likely; the reason I haven’t (given that) addressed it is that there’s never time to address everything.
If there is a specific hypothesis in this class that you would like us to entertain, perhaps you should consider saying what it is.
The thing is, that does presuppose that my belief that [...] is wrong.
No, it presupposes that it could be wrong. (I would say it carries less presumption that it’s wrong than your last several comments in this thread carry presumption that it’s right.) The idea is: It could be wrong, in which case giving it a go will bring immediate benefit; or it could be right, but we could be (mutually) reasonable enough to see that it’s right when we give it a go and that doesn’t work, in which case giving it a go will get us past the meta-level stuff about whether I’m likely to be unable to understand. Or, of course, it could go the other way.
I’m not sure what your plan is for dealing with the possibility of object-level blind spots
When one is suspected, look at it up close and see whether it really is one. Which, y’know, is what I’m suggesting here.
if you’re writing all these words because you actually want to know how the heck I see it [...] I expect it to take a decent amount of work
What I was hoping to know, in the first instance, is what I have allegedly misunderstood in what you wrote before. You know, where you said things of the form “your description doesn’t even contain my actual reason for saying X”—which I took, for reasons that still look solid to me, to indicate that you had already given your actual reason.
If the only way for you to explain all my serious misunderstandings of what you wrote is for you to write an effortful lengthy essay about your general view … well, I expect it would be interesting. But on the face of it that seems like more effort than it should actually take. And if the reason why it should take all that effort is that, in essence, I have (at least in your opinion) understood so little of your position that there’s no point trying to correct me rather than trying again from scratch at much greater length then I honestly don’t know why you’re still in this discussion.
I’m sure I’d have a bunch of questions on how you see things, if you’d have any interest in explaining your perspective
I am happy to answer questions. I’ve had it pretty much up to here (you’ll have to imagine a suitable gesture) with meta-level discussion about what either of us may or may not be capable of understanding, though, so if the questions you want to ask are about what you think of me or what I think of you or what I think you think I think you think I am capable of understanding, then let’s give that a miss.
rather than, say,
jimmy says “so I have a rather complicated and subtle argument to make, so I’m going to have to begin with some preliminaries”.
I suppose I could have said “so I have a rather complicated and subtle argument to make. I would have to begin with some preliminaries and it would end up being kinda long and take a lot of work, so I’m not sure it’s worth it unless you really want to hear it”, and in a lot of ways I expect that would have gone better. I probably will end up doing this next time.
However in a couple key ways, it wouldn’t have, which is why I didn’t take that approach this time. And that itself is a complicated and subtle argument to make.
EDIT: I should clarify. I don’t necessarily think I made the right choice here, and it is something I’m still thinking about. However, it was an explicit choice and I had reasons.
When what happens is that you begin by making your argument and then start saying: nope, you didn’t understand it—and when your reaction to a good-faith attempt at dealing with the alleged misunderstanding is anything other than “oh, OK, let me try to explain more clearly”—I think it does imply something like disrespect; at least, as much like disrespect as those options I listed above.
Right, and I think this is our fundamental disagreement right here. I don’t think it implies any disrespect at all, but I’m happy to leave it here if you want.
Because what you’re saying is: [...] That is, it means that something about me renders me unlikely [...] to be able to understand this thing that you want to say, and that you earlier thought was a reasonable thing to say without laying a load of preparatory groundwork.
I see where you’re coming from, but I don’t think arguments with subtle backing always need that warning, nor do they always need to be intended to be fully understood in order to be worth saying. This means that “I can’t give you an explanation you’ll understand without a ton of work” doesn’t single you out nearly as much as you’d otherwise think.
I can get into this if you’d like, but it’d just be more meta shit, and at this point my solution is starting to converge with yours: “do the damn write up or shut up, jimmy”
See above for why I haven’t considered it likely; the reason I haven’t (given that) addressed it is that there’s never time to address everything.
I agree that you can’t address everything (nor have I), but this one stands out as the one big one I keep getting back to—and one where if you addressed it, this whole thing would resolve pretty much right away.
It seems like now that you have, we’re probably gonna end up at something more or less along the lines of “we disagree about whether ‘mutual respect’ and ‘knowably unable to progress on the object level’ go together to a non-negligible extent, at least as it applies here, and gjm is uninterested in resolving this disagreement”. That’s an acceptable ending for me, so long as you know that it is a genuine belief of mine and that I’m not just trying to weasel around denying that I’ve been showing disrespect and shit.
No, it presupposes that it could be wrong.
I thought I addressed that possibility with the “err, or this” bit.
When one is suspected, look at it up close and see whether it really is one. Which, y’know, is what I’m suggesting here.
I was talking about the ones where that won’t work, which I see as a real thing though you might not.
If the only way for you to explain all my serious misunderstandings of what you wrote is for you to write an effortful lengthy essay about your general view … well, I expect it would be interesting.
If I ever end up writing it up, I’ll let you know.
But on the face of it that seems like more effort than it should actually take. And if the reason why it should take all that effort is that, in essence, I have (at least in your opinion) understood so little of your position that there’s no point trying to correct me rather than trying again from scratch at much greater length then I honestly don’t know why you’re still in this discussion.
:)
That’d probably have to be a part of the write-up, as it calls on all the same concepts.
“Attempting to seize the intellectual high ground” = [...] any response I make can be answered with “No, sorry, I was right: you didn’t understand my argument”—regardless of what I actually have understood or not understood.
The first part I feel like I’ve already addressed and haven’t seen a response to (the difference between staking active claims vs speaking from a place that you choose to draw (perhaps fallacious) inferences from and then treat as if they’re active claims).
The second part is interesting though. It’s pretty darn answerable to me! I didn’t realize that you thought that I might hear an answer that perfectly paces my views and then just outright lie “nope, that’s not it!”. If that’s something you think I could even conceivably do, I’m baffled as to why you’d be putting energy into interacting with me!
But yes, it does place the responsibility on me of deciding whether you understand my pov and reporting honestly on the matter. And yes, not all people will want to be completely honest on the matter. And yes, I realize that you don’t have reason to be convinced that I will be, and that’s okay.
However, it would be very stupid of me not to be. I can hide away in my head for as long as I want: no matter how hard you try, and no matter how obvious the signs become, if I’m willing to ignore them all I can believe my believies for as long as I want and pretend that I’m some sort of wise guru on the mountain top, and that everyone else just lacks my wisdom. You’re right, if I want to hide from the truth and never give you the opportunity to convince me that I’m wrong, I can. And that would be bad.
But I don’t see what solution you have to this: if the inferential distance is larger than you realize, then your method of “then explain what it is that I allegedly didn’t understand” can’t work, because if you’re still expecting a short inferential distance you will have to conclude either that I’m speaking gibberish or that I’m wrong—even if I’m not.
It’s like the “double crux” thing. We’re working our way down the letters, and you’re saying “if you think I don’t understand your pov you should explain where I’m wrong!” and I’m saying “if I thought that you would be able to judge what I’m saying without other hidden disagreements predictably leading to faulty judgements, then I would agree that is a good idea”. I can’t just believe it’s a good idea when I don’t, and yes, that looks the same as “I’m unwilling to stick my neck out because I secretly know I’m wrong”. However, it’s a necessary thing whenever the inferential distance is larger than one party expects, or when one party believes it to be so (and if you don’t believe that I believe that it is… I guess I’d be interested in hearing why). We can’t shortcut the process by pointing at it being “unanswerable”. It is what it is.
It’d be nice if this weren’t ever an issue, but ultimately I think it’s fine because there’s no free lunch. If I feel cognitive dissonance and don’t admit that you have a point, it tends to show, and that would make me look bad. If it doesn’t show somehow, I still fail to convince anyone of anything. I still fail to give anyone any reason to believe I’m some wise guru on the mountaintop even if I really really want them to believe that. It’s not going to work, because I’m not doing anything to distinguish myself from that poser that has nothing interesting to say.
If I want to actually be able to claim status, and not retreat to some hut muttering at how all the meanies won’t give me the status that I deserve, I have to actually stick my neck out and say something useful and falsifiable at some point. I get that—which is why I keep making the distinction between actively staking claims and refusing to accept false presuppositions.
The thing is, my first priority is actually being right. My second priority is making sure that I don’t give people a reason to falsely conclude that I’m wrong and that I am unaware of or/unable to deal with the fact that they think that. My third priority is that I actually get to speak on the object level and be useful. I’m on step two now. You seem to be acting as if it’s impossible to be on step two honestly and that I must be trying to hide from engagement if I am not yet ready to move on to step three with you. I don’t know what else to tell you. I don’t agree.
If you don’t want to automatically accept that I see things you don’t (and that these things are hard to clearly communicate to someone with your views), then that’s fine. I certainly don’t insist that you accept that I do. Heck, I encourage skepticism. However, I’m not really sure how you can know that I don’t, and it seems like you should probably make room for both possibilities if you want to have a productive conversation with me (and it’s fine if you don’t).
The main test that I use in distinguishing between wise old men on mountain tops and charlatans is whether my doubt in them provokes signs of cognitive dissonance—but there are both false positives and false negatives there. A second test I use is to see whether the guy has any real-world results that impress me. A third is to see whether I can get him to say anything useful to me. A fourth is whether there are in fact times that I end up eventually seeing things his way on my own.
It’s not always easy, and I’ve realized again and again that even foolish people are wiser than I give them credit for, so at this point I’m really hesitant to rule that out just so I can actively deny their implicit claim to status. I prefer to just not actively grant it, and say something like “yes, you might be really wise, but I can’t see that you’re not a clown, and until I do I’m going to have to assign a higher probability to the latter. If you can give me some indication that you’re not a clown, I would appreciate it, and I understand that if you don’t, that is no proof that you are”.
If you go back through my comments on LW (note: I am not actually suggesting you do this; there are a lot of them, as you know) you will find that in this sort of context I almost always say explicitly something like “evidence and arguments”, precisely because I am not confused about the difference between the two. Sometimes I am lazy. This was one of those times.
Bad arguments and bad evidence can serve equally well in a Gish gallop.
I’m not addressing the paper specifically, I’m answering your question more generally. I still think it applies here though. When they identify “misinformation”, are they first looking for things that support the wrong conclusion and then explaining why you shouldn’t believe this wrong thing, or are they first looking at reasoning processes and explaining how to do them better (without tying it to the conclusion they prefer).
For example, do they address any misinformation that would lead people to being misled into thinking global warming is more real/severe than it is? If they don’t and they’re claiming to be about “misinformation” and that they’re not pushing an agenda, then that’s quite suspicious. Maybe they do, I dunno. But that’s where I’d look to tell the difference between what they’re claiming and what Lumifer is accusing them of.
The fact that they hold that view does not. It’s possible to agree with someone’s conclusions and still think they’re being dishonest about how they’re arguing for them, you know. (And also, to disagree with someone’s conclusions but think that they’re at least honest about how they get there.)
The fact that it is clear from reading this paper, which is supposedly not about what they believe, sorta does, depending on how clear they are about it and how they are clear about it. It’s possible for propaganda to contain good arguments, but you do have to be pretty careful with it because you’re getting filtered evidence.
(notice how it applies here. I’m talking about processes not conclusions, and haven’t given any indication of whether or not I buy into global warming—because it doesn’t matter, and if I did it’d just be propaganda slipping out)
What makes misinformation misinformation is that it’s factually wrong, not that the reasoning processes underlying it are bad. (Not to deny the badness of bad reasoning, but it’s a different failure mode.)
They pick one single example of misinformation, which is the claim that there is no strong consensus among climate scientists about anthropogenic climate change.
It would be quite suspicious if “global warming is real” and “global warming is not real” were two equally credible positions. As it happens, they aren’t. Starting from the premise that global warming is real is no more unreasonable than starting from the premise that evolution is real, and not much more unreasonable than starting from the premise that the earth is not flat.
I disagree. If you’re going to do an experiment about how to handle disinformation, you need an example of disinformation. You can’t say “X is an instance of disinformation” without making it clear that you believe not-X. Now, I suppose they could have identified denying that there’s a strong consensus on global warming as disinformation while making a show of not saying whether they agree with that consensus or not, but personally I’d regard that more as a futile attempt at hiding their opinions than as creditable neutrality.
I think you have, actually. If there were a paper about how to help people not be deceived by dishonest creationist propaganda, and someone came along and said “do they address any misinformation that would lead people into being misled into thinking 6-day creation is less true than it is?” and the like, it would be a pretty good bet that that person was a creationist.
Now, of course I could be wrong. If so, then I fear you have been taken in by the rhetoric of the “skeptics”[1] who are very keen to portray the issue as one where it’s reasonable to take either side, where taking for granted that global warming is real is proof of dishonesty or incompetence, etc. That’s not the actual situation. At this point, denial of global warming is about as credible as creationism; it is not a thing scientific integrity means people should treat neutrally.
[1] There don’t seem to be good concise neutral terms for the sides of that debate.
Both are quite simplistic positions. If you look at the IPCC report there are many different claims about global warming effects and those have different probabilities attached to them.
It’s possible to be wrong on some of those probabilities in both directions, but thinking about probabilities is a different mode than “On what side do you happen to be?”
Incidentally, the first comment in this thread to talk in terms of discrete “sides” was not mine above but one of jimmy’s well upthread, and I think most of the ensuing discussion in those terms is a descendant of that. I wonder why you chose my comment in particular to object to.
I don’t know about you, but I don’t have the impression that my comments in this thread are too short.
Yes, the climate is complicated. Yes, there is a lot more to say than “global warming is happening” or “global warming is not happening”. However, it is often convenient to group positions into two main categories: those that say that the climate is warming substantially and human activity is responsible for a lot of that warming, and those that say otherwise.
Yes, and identifying it is a reasoning process, which they are claiming to teach.
Duh.
Sure, but there’s more than one X at play. You can believe, for example, that “the overwhelming scientific consensus is that global warming is real” is false and that would imply that you believe not-”the overwhelming scientific consensus is that global warming is real”. You’re still completely free to believe that global warming is real.
“What about the misinformation on the atheist side!” is evidence that someone is a creationist to the extent that they cannot separate their beliefs from their principles of reason (which usually people cannot do).
If someone is actually capable of the kind of honesty where they hold their own side to the same standards as the outgroup side, it is no longer evidence of which side they’re on. You’re assuming I don’t hold my own side to the same standards. That’s fine, but you’re wrong. I’d have the same complaints if it were a campaign to “teach them creationist folk how not to be duped by misinformation”, and I am absolutely not a creationist by any means.
I can easily give an example, if you’d like.
Nothing I am saying is predicated on there being more than one “reasonable” side.
If you take for granted a true thing, it is not proof of dishonesty or incompetence.
However, if you take it for granted and say that there’s only one reasonable side, then it is proof that you’re looking down on the other side. That’s fine too, if you’re ready to own that.
It just becomes dishonest when you try to pretend that you’re not. It becomes dishonest when you say “I’m just helping you spot misinformation, that’s all” when what you’re really trying to do is make sure that they believe Right thoughts like you do, so they don’t fuck up your society by being stupid and wrong.
There’s a difference between helping someone reason better and helping someone come to the beliefs that you believe in, even when you are correct. Saying that you’re doing the former while doing the latter is dishonest, and it doesn’t help if most people fail to make the distinction (or if you somehow can’t fathom that I might be making the distinction myself and criticizing them for dishonesty rather than for disagreeing with me).
I don’t think they are. Teaching people to reason is really hard. They describe what they’re trying to do as “inoculation”, and what they’re claiming to have is not a way of teaching general-purpose reasoning skills that would enable people to identify misinformation of all kinds but a way of conveying factual information that makes people less likely to be deceived by particular instances of misinformation.
Not only that. Suppose the following is the case (as in fact I think it is): There is lots of creationist misinformation around and it misleads lots of people; there is much less anti-creationist misinformation around and it misleads hardly anyone. In that case, it is perfectly reasonable for non-creationists to try to address the problem of creationist misinformation without also addressing the (non-)problem of anti-creationist misinformation.
I think the situation with global warming is comparable.
I’m not. Really, truly, I’m not. I’m saying that from where I’m sitting it seems like global-warming-skeptic misinformation is a big problem, and global-warming-believer misinformation is a much much smaller problem, and the most likely reasons for someone to say that discussion of misinformation in this area should be balanced in the sense of trying to address both kinds are (1) that the person is a global-warming skeptic (in which case it is unsurprising that their view of the misinformation situation differs from mine) and (2) that the person is a global-warming believer who has been persuaded by the global-warming skeptics that the question is much more open than (I think) it actually is.
Sure. (Though I’m not sure “looking down on” is quite the right phrase.) So far as I can tell, the authors of the paper we’re talking about don’t make any claim not to be “looking down on” global-warming skeptics. The complaints against them that I thought we were discussing here weren’t about them “looking down on” global-warming skeptics. Lumifer described them as trying to “prevent crimethink”, and that characterization of them as trying to practice Orwellian thought control is what I was arguing against.
I think this is a grossly unreasonable description of the situation, and the use of the term “crimethink” (Lumifer’s, originally, but you repeated it) is even more grossly unreasonable. The unreasonableness is mostly connotational rather than denotational; that is, there are doubtless formally-kinda-equivalent things you could say that I would not object to.
So, taking it bit by bit:
They don’t say that. They say: here is a way to help people not be taken in by disinformation on one particular topic. (Their approach could surely be adapted to other particular topics. It could doubtless also be used to help people not be informed by accurate information on a particular topic, though to do that you’d need to lie.) They do not claim, nor has anyone here claimed so far as I know, that they are offering a general-purpose way of distinguishing misinformation from accurate information. That would be a neat thing, but a different and more difficult thing.
With one bit of spin removed, this becomes “make sure they are correct rather than incorrect”. With one bit of outright misrepresentation removed, it then becomes “make it more likely that they are correct rather than incorrect”. This seems to me a rather innocuous aim. If I discover that (say) many people think the sun and the moon are the same size, and I write a blog post or something explaining that they’re not even though they subtend about the same angle from earth, I am trying to “make sure that they believe Right thoughts”. But you wouldn’t dream of describing it that way. So what makes that an appropriate description in this case?
(Incidentally, it may be worth clarifying that the specific question about which the authors of the paper want people to “believe Right thoughts” is not global warming but whether there is a clear consensus on global warming among climate scientists.)
I’m just going to revisit this because it really is obnoxious. The point of the term “crimethink” in 1984 is that certain kinds of thoughts there were illegal and people found thinking them were liable to be tortured into not thinking them any more. No one is suggesting that it should be illegal to disbelieve in global warming. No one is suggesting that people who disbelieve in global warming should be arrested, or tortured, or have their opinions forcibly changed in any other fashion. The analogy with “crimethink” just isn’t there. Unless you are comfortable saying that “X regards Y as crimethink” just means “X thinks Y is incorrect”, in which case I’d love to hear you justify the terminology.
This is factually incorrect (and that’s even without touching Twitter and such).
Oh, all right. You don’t like the word. How did you describe their activity? “...not a way of teaching general-purpose reasoning skills that would enable people to identify misinformation of all kinds but a way of conveying factual information that makes people less likely to be deceived by particular instances of misinformation.”
Here: brainwashing. Do you like this word better?
Oh, one other thing. I’ve got no problems with the word. What I don’t like is its abuse to describe situations in which the totality of the resemblance to the fiction from which the term derives is this: Some people think a particular thing is true and well supported by evidence, and therefore think it would be better for others to believe it too.
If you think that is what makes the stuff about “crimethink” in 1984 bad, then maybe you need to read it again.
As usual, I like my points very very sharp, oversaturated to garish colours, and waved around with wild abandon :-)
You don’t.
Or, to put it differently, I prefer not to lie.
Would you like to point out to me where I lied, with quotes and all?
Sure. Just a quick example, because I have other things I need to be doing.
I take it that saying “That is factually incorrect” with those links amounts to a claim that the links show that the claim in question is factually incorrect. Neither of your links has anything to do with anyone saying it should be illegal to disbelieve in global warming.
(There were other untruths, half-truths, and other varieties of misdirection in what you said on this, but the above is I think the clearest example.)
[EDITED because I messed up the formatting of the quote blocks. Sorry.]
An unfortunate example because I believe I’m still right and you’re still wrong.
We’ve mentioned what, a California law proposal and a potential FBI investigation? Wait, but there is more! A letter from 20 scientists explicitly asks for a RICO (a US law aimed at criminal organizations such as drug cartels) investigation of deniers. A coalition of Attorneys General of several US states set up an effort to investigate and prosecute those who “mislead” the public about climate change.
There’s Bill Nye:
Of course there is James Hansen, e.g. this (note the title):
or take David Suzuki:
Here is Lawrence Torcello, Assistant Professor of Philosophy, no less:
Hell, there is a paper in a legal journal: Deceitful Tongues: Is Climate Change Denial a Crime? (by the way, the paper says “yes”).
Sorry, you are wrong.
Nice Gish gallop, but not one of those links contradicts my statement that
which is what you called “factually incorrect”. Most of them (all but one, I think) are irrelevant for the exact same reason I already described: what they describe is people suggesting that some of the things the fossil fuel industry has done to promote doubt about global warming may be illegal under laws that already exist and have nothing to do with global warming, because those things amount to false advertising or fraud or whatever.
In fact, these prosecutions, should any occur, would I think have to be predicated on the key people involved not truly disbelieving in global warming. The analogy that usually gets drawn is with the tobacco industry’s campaign against the idea that smoking causes cancer; the executives knew pretty well that smoking probably did cause cancer, and part of the case against them was demonstrating that.
Are you able to see the difference between “it should be illegal to disbelieve in global warming” and “some of the people denying global warming are doing it dishonestly to benefit their business interests, in which case they should be subject to the same sanctions as people who lie about the fuel efficiency of the cars they make or the health effects of the cigarettes they make”?
I’m not sure that responding individually to the steps in a Gish gallop is a good idea, but I’ll do it anyway—but briefly. In each case I’ll quote from the relevant source to indicate how it’s proposing the second of those rather than the first. Italics are mine.
Letter from 20 scientists: “corporations and other organizations that have knowingly deceived the American people about the risks of climate change [...] The methods of these organizations are quite similar to those used earlier by the tobacco industry. A RICO investigation [...] played an important role in stopping the tobacco industry from continuing to deceive the American people about the dangers of smoking.”
Coalition of attorneys general: “investigations into whether fossil fuel companies have misled investors about how climate change impacts their investments and business decisions [...] making sure that companies are honest about what they know about climate change”. (But actually this one seems to be mostly about legislation on actual emissions, rather than about what companies say. Not at all, of course, about what individuals believe.)
Bill Nye (actually the story isn’t really about him; his own comment is super-vague): “did they mislead their investors and overvalue their companies by ignoring the financial costs of climate change and the potential of having to leave fossil fuel assets in the ground? [...] are they engaged in a conspiracy to mislead the public and affect public policy by knowingly manufacturing false doubt about the science of climate change?”
James Hansen: “he will accuse the chief executive officers [...] of being fully aware of the disinformation about climate change they are spreading”
David Suzuki: This is the one exception I mentioned above; Suzuki is (more precisely: was, 9 years ago) attacking politicians rather than fossil fuel companies. It seems to be rather unclear what he has in mind, at least from that report. He’s reported as talking about “what’s going on in Ottawa and Edmonton” and “what they’re doing”, but there are no specifics. What does seem clear is that (1) he’s talking specifically about politicians and (2) it’s “what they’re doing” rather than “what they believe” that he has a problem with. From the fact that he calls it “an intergenerational crime”, it seems like he must be talking about something with actual effects so I’m guessing it’s lax regulation or something he objects to.
Lawrence Torcello (incidentally, why “no less”? An assistant professor is a postdoc; it’s not exactly an exalted position): “corporate funding of global warming denial [...] purposefully strive to make sure “inexact, incomplete and contradictory information” is given to the public [...] not only corrupt and deceitful, but criminally negligent”.
“Deceitful Tongues” paper: “the perpetrators of this deception must have been aware that its foreseeable impacts could be devastating [...] As long as climate change deniers can be shown to have engaged in fraud, that is, knowing and wilful deception, the First Amendment afford them no protection.”
So, after nine attempts, you have given zero examples of anyone suggesting that it should be illegal to disbelieve in global warming. So, are you completely unable to read, or are you lying when you offer them as refutation of my statement that, and again I quote, “no one is suggesting that it should be illegal to disbelieve in global warming”?
(I should maybe repeat here a bit of hedging from elsewhere in the thread. It probably isn’t quite true that no one at all, anywhere in the world has ever suggested that it should be illegal to disbelieve in global warming. Almost any idea, no matter how batshit crazy, has someone somewhere supporting it. So, just for the avoidance of doubt: what I meant is that “it should be illegal to disbelieve in global warming” is like “senior politicians across the world are really alien lizard people”: you can doubtless find people who endorse it, but they will be few in number and probably notably crazy in other ways, and they are in no way representative of believers in global warming or “progressives” or climatologists or any other group you might think it worth criticizing.)
I was never a fan of beating my head against a brick wall.
Tap.
Your first link is to proposed legislation in California. O NOES! Is California going to make it illegal to disbelieve in global warming? Er, no. The proposed law—you can go and read it; it isn’t very long; the actual legislative content is section 3, which is three short paragraphs—has the following effect: If a business engages in “unfair competition, as defined in Section 17200 of the Business and Professions Code” (it turns out this basically means false advertising), and except that the existing state of the law stops it being prosecuted because the offence was too long ago, then the Attorney General is allowed to prosecute it anyway.
I don’t know whether that’s a good idea, but it isn’t anywhere near making it illegal to disbelieve in global warming. It removes one kinda-arbitrary limitation on the circumstances under which businesses can be prosecuted if they lie about global warming for financial gain.
Your second link is similar, except that it doesn’t involve making anything illegal that wasn’t illegal before; the DoJ is considering bringing a civil action (under already-existing law, since the DoJ doesn’t get to make laws) against the fossil fuel industry for, once again, lying about global warming for financial gain.
“Brainwashing” is just as dishonestly bullshitty as “crimethink”, and again so far as I can tell if either term applies here it would apply to (e.g.) pretty much everything that happens in high school science lessons.
Let me quote you yourself, with some emphasis:
We’re not talking about making new laws. We’re talking about taking very wide and flexible existing laws and applying them to particular targets, ones to which they weren’t applied before. The goal, of course, is intimidation and lawfare since the chances of a successful prosecution are slim. The costs of defending, on the other hand, are large.
“Lying for financial gain” is a very imprecise accusation. Your corner chip shop might have a sign which says “Best chips in town!” which is lying for financial gain. Or take non-profits which tend to publish, let’s be polite and say “biased” reports which are, again, lying for financial gain.
Your point was that no one suggested going after denialists/sceptics with legal tools and weapons. This is not true.
It also is not my point. There are four major differences between what is suggested by your bloviation about “crimethink” and the reality:
“Crimethink” means you aren’t allowed to think certain things. At most, proposals like the ones you linked to dishonest descriptions of[1] are trying to say you’re not allowed to say certain things.
“Crimethink” is aimed at individuals. At most, proposals like the ones you linked to dishonest descriptions of[1] are trying to say that businesses are not allowed to say certain things.
“Crimethink” applies universally; a good citizen of Airstrip One was never supposed to contemplate the possibility that the Party might be wrong. Proposals like the ones you linked to dishonest descriptions of[1] are concerned only with what businesses are allowed to do in their advertising and similar activities.
“Crimethink” was dealt with by torture, electrical brain-zapping, and other such means of brute-force thought control. Proposals like the ones you linked to dishonest descriptions of[1] would lead at most to the same sort of sanction imposed in other cases of false advertising: businesses found guilty (let me remind you that neither proposal involves any sort of new offences) would get fined.
[1] Actually, the second one was OK. The first one, however, was total bullshit.
Sure. None the less, there is plenty that it unambiguously doesn’t cover. Including, for instance, “disbelieving in global warming”.
Please stay on topic. This subthread is about your claim that “No one is suggesting that it should be illegal”
Are you implying that you can disbelieve all you want deep in your heart but as soon as you open your mouth you’re fair game?
A claim I made because you were talking about “crimethink”. And, btw, what was that you were saying elsewhere about other people wanting to set the rules of discourse? I’m sorry if you would prefer me to be forbidden to mention anything not explicit in the particular comment I’m replying to, but I don’t see any reason why I should be.
No. (Duh.) But I am saying that a law that forbids businesses to say things X for purposes Y in circumstances Z is not the same as a law that forbids individuals to think X.
Oh. Well, in that case, if they’re saying “teaching you to not think bad is too hard, we’ll just make sure you don’t believe the wrong things, as determined by us”, then I kinda thought Lumifer’s criticism would have been too obvious to bother asking about.
Oh… yeah, that’s not true at all. If it were true, and 99% of the bullshit were generated by one side, then yes, it would make sense to spend 99% of one’s time addressing bullshit from that one side and it wouldn’t be evidence for pushing an agenda. There’s still other reasons to have a more neutral balance of criticism even when there’s not a neutral balance of bullshit or evidence, but you’re right—if the bullshit is lopsided then the lopsided treatment wouldn’t be evidence of dishonest treatment.
It’s just that bullshit from one’s own side is a whole lot harder to spot because you immediately gloss over it thinking “yep, that’s true” and don’t stop to notice “wait! That’s not valid!”. In every debate I can think of, my own side (or “the correct side”, if that’s something we’re allowed to declare in the face of disagreement) is full of shit too, and I just didn’t notice it years ago.
This reads to me as “I’m not. Really, truly, I’m not. I’m just [doing exactly what you said I was doing]”. This is a little hard to explain as there is some inferential distance here, but I’ll just say that what I mean by “have given no indication of what I believe” and the reason I think that is important is different from what it looks like to you.
Part of “preventing crimethink” is that the people trying to do it usually believe that they are justified in doing so (“above” the people they’re trying to persuade), and also that they are “simply educating the masses”, not “making sure they don’t believe things that we believe [but like, we really believe them and even assert that they are True!]”.
This is what it feels like from the inside when you try to enforce your beliefs on people. It feels like the beliefs you have are merely correct, not your own beliefs (that you have good reason to believe you’re right on, etc). However, you don’t have some privileged access to truth. You have to reason and stuff. If your reasoning is good, you might come to right answers even. If the way that you are trying to make sure they are incorrect is by finding out what is true [according to your own beliefs, of course] and then nudging them towards believing the things that are true (which works out to “things that you believe”), then it is far more accurate to say “make sure they hold the same beliefs as me”, even if you hold the correct beliefs and even if it’s obviously correct and unreasonable to disagree.
And again, just to be clear, this applies to creationism too.
If you simply said “many people think the sun and the moon are the same size, they aren’t and here’s proof”, I’d see you as offering a helpful reason to believe that the sun is bigger.
If it was titled “I’m gonna prevent you from being wrong about the moon/sun size!”, then I’d see your intent a little bit differently. Again, I’m talking about the general principles here and not making claims about what the paper itself actually does (I cannot criticise the paper itself as I have not read it), but it sounded to me like they weren’t just saying “hey guys, look, scientists do actually agree!” and were rather saying “how can we convince people that scientists agree” and taking that agreement as presupposed. “Inoculate against this idea” is talking about the idea and the intent to change their belief. If all you are trying to do is offer someone a new perspective, you can just do that—no reason to talk about how “effective” this might be.
Yes, I thought it was obvious and common knowledge that Lumifer was speaking in hyperbole. No, they are not actually saying people should be arrested and tortured and I somehow doubt that is the claim Lumifer was trying to make here.
It’s not “thinks Y is incorrect”, it’s “socially punishes those who disagree”, even if it’s only mild punishment and even if you prefer not to see it that way. If, instead of arguing that they’re wrong you presuppose that they’re wrong and that the only thing up for discussion is how they could come to the wrong conclusion, they’re going to feel like they’re being treated like an idiot. If you frame those who disagree with you as idiots, then even if you have euphemisms for it and try to say “oh, well it’s not your fault that you’re wrong, and everyone is wrong sometimes”, then they are not going to want to interact with you.
Does this make sense?
If you frame them as an idiot, then in order to have a productive conversation with you that isn’t just “nuh uh!”/”yeah huh!”, they have to accept the frame that they’re an idiot, and no one wants to do that. They may be an idiot, and from your perspective it may not be a punishment at all—just that you’re helping them realize their place in society as someone who can’t form beliefs on their own and should just defer to the experts. And you might be right.
Still, by enforcing your frame on them, you are socially punishing them, from their perspective, and this puts pressure on them to “just believe the right things”. It’s not “believe 2+2=5 or the government will torture you”, it’s “believe that this climate change issue is a slam dunk or gjm will publicly imply that you are unreasonable and incapable of figuring out the obvious”, but that pressure is a step in the same direction; whether or not the climate change issue is a slam dunk, and whether or not 2+2 really is 5, does not change a thing.

If I act to lower the status of people who believe the sky isn’t blue without even hearing out their reasons, then I am policing thoughts, and it becomes real hard to be in my social circle if you don’t share this communal (albeit true) belief. This has costs even when the communal beliefs are true. At the point where I start thinking less of people and imposing social costs on them for not sharing my beliefs (rather than for their inability to defend their own or update), I am disconnecting the truth-finding mechanism and banking on my own beliefs being true enough on their own. This is far more costly than it seems like it should be, for more than one reason, the obvious one being that people draw this line waaaaaaay too early, and very often are wrong about things where they stop tracking the distinction between “I believe X” and “X is true”.
And yes, there are alternative ways of going about it that don’t require you to pretend that “all opinions are equally valid” or that you don’t think it would be better if more people agreed with you or any of that nonsense.
Does this make sense?
Those awful geography teachers, making sure their pupils don’t believe the wrong things (as determined by them) about what city is the capital of Australia! Those horrible people at snopes.com, making sure people don’t believe the wrong things (as determined by them) about whether Procter & Gamble is run by satanists!
What makes Lumifer’s criticism not “too obvious to bother about” is not doubt about whether the people he’s criticizing are aiming to influence other people’s opinions. It’s whether there’s something improper about that.
In your opinion, is anti-creationist misinformation as serious a problem as creationist misinformation? (10% as serious?)
Yes, it is. But it’s also what it feels like from the inside in plenty of other situations that don’t involve enforcing anything, and it’s also what it feels like from the inside when the beliefs in question are so firmly established that no reasonable person could object to calling them “facts” as well as “beliefs”. (That doesn’t stop them being beliefs, of course.)
(The argument “You are saying X. X is what you would say if you were doing Y. Therefore, you are doing Y.” is not a sound one.)
The trouble is that the argument you have offered for this is so general that it applies e.g. to teaching people about arithmetic. I don’t disagree that it’s possible, and not outright false, to portray what an elementary school teacher is doing as “make sure these five-year-olds hold the same beliefs about addition as me”; but I think it’s misleading for two reasons. Firstly, because it suggests that their goal is “have the children agree with me” rather than “have the children be correct”. (To distinguish, ask: Suppose it eventually turns out somehow that you’re wrong about this, but you never find that out. Would it be better if the children end up with right beliefs that differ from yours, or wrong ones that match yours? Of course they will say they prefer the former. So, I expect, will most people trying to propagate opinions that are purely political; I am not claiming that answering this way is evidence of any extraordinary virtue. But I think it makes it wrong to suggest that what they want is to be agreed with.) Secondly, because it suggests (on Gricean grounds) that there actually is, or is quite likely to be, a divergence between “the beliefs I hold” and “the truth” in their case. When it comes to arithmetic, that isn’t the case.
Now, the fact (if you agree with me that it’s a fact; maybe you don’t) that the argument leads to a bad place when applied to teaching arithmetic doesn’t guarantee that it does so when it comes to global warming. But if not, there must be a relevant difference between the two. In that case, what do you think the relevant differences are?
All the talk of “preventing” and other coercion is stuff that you and Lumifer have made up. It’s not real.
You know, you could actually just read the paper. It’s publicly available and it isn’t very long. Anyway: there are two different audiences involved here, and it looks to me (not just from the fragment I just quoted, but from what you say later on) as if you are mixing them up a bit.
The paper is (implicitly) addressed to people who agree with its authors about global warming. It takes it as read that global warming is real, not as some sort of nasty coercive attempt to make its readers agree with that but because the particular sort of “inoculation” it’s about will mostly be of interest to people who take that position. (And perhaps also because intelligent readers who disagree will readily see how one might apply its principles to other issues, or other sides of the same issue if it happens that the authors are wrong about global warming.)
The paper describes various kinds of interaction between (by assumption, global-warming-believing) scientists and the public. So:
Those interactions are addressed to people who do not necessarily agree with the paper’s authors about global warming. In fact, the paper is mostly interested in people who have neither strong opinions nor expertise in the field. The paper doesn’t advocate treating those people coercively; it doesn’t advocate trying to make them feel shame if they are inclined to disagree with the authors; it doesn’t advocate trying to impose social costs for disagreeing; it doesn’t advocate saying or implying that anyone is an idiot.
So. Yes, the paper treats global warming as a settled issue. That would be kinda rude, and probably counterproductive, if it were addressed to an audience a nontrivial fraction of which disagrees; but it isn’t. It would be an intellectual mistake if in fact the evidence for global warming weren’t strong enough to make it a settled issue; but in fact it is. (In my opinion, which is what’s relevant for whether I am troubled by their writing what they do.)
I don’t (always) object to hyperbole. The trouble is that so far as I can tell, nothing that would make the associations of “crimethink” appropriate is true in this case. (By which, for the avoidance of doubt, I mean not only “they aren’t advocating torturing and brain-raping people to make them believe in global warming”, for instance, but “they aren’t advocating any sort of coercive behaviour at all”. And likewise for the other implications of “crimethink”.) The problem isn’t that it’s hyperbole, it’s that it’s not even an exaggeration of something real.
Except that this “social punishment” is not something in any way proposed or endorsed by the paper Lumifer responded to by complaining about “crimethink”. He just made that up. (And you were apparently happy to go along with it despite having, by your own description, not actually read the paper.)
No doubt. But, once again, none of that is suggested or endorsed by the paper; neither does it make sense to complain that the paper is itself practising that behaviour, because it is not written for an audience of global-warming skeptics.
You might, of course, want to argue that I am doing that, right here in this thread. I don’t think that would be an accurate account of things, as it happens, but in any case I am not here concerned to defend myself. Lumifer complained that the paper was treating global warming skepticism as “crimethink”, and that’s the accusation I was addressing. If you want to drop that subject and discuss whether my approach in this thread is a good one, I can’t stop you, but it seems like a rather abrupt topic shift.
OK, I guess, though “policing thoughts” seems to me excessively overheated language. But, again, this argument can be applied (as you yourself observe) to absolutely anything. In practice, we generally don’t feel the need to avoid saying straightforwardly that the sky is blue, or that 150 million years ago there were dinosaurs. That does impose some social cost on people who think the sky is red or that life on earth began 6000 years ago; but the reason for not hedging all the time with “as some of us believe”, etc., isn’t (usually) a deliberate attempt to impose social costs; it’s that it’s clearer and easier and (for the usual Gricean reasons: if you hedge, many in your audience will draw the conclusion that there must be serious doubt about the matter) less liable to mislead people about the actual state of expert knowledge if we just say “the sky is blue” or “such-and-such dinosaurs were around 150 million years ago”.
But, again, if we’re discussing—as I thought we were—the paper linked upthread, this is all irrelevant for the reasons given above. (If, on the other hand, we’ve dropped that subject and are now discussing whether gjm is a nasty rude evil thought-policer, then I will just remark that I do in fact generally go out of my way to acknowledge that some people do not believe in anthropogenic climate change; but sometimes, as e.g. when Lumifer starts dropping ridiculous accusations about “crimethink”, I am provoked into being a bit more outspoken than usual. And what I am imposing (infinitesimal) social costs for here is not “expressing skepticism about global warming”; it’s “being dickish about global warming” and, in fact, “attempting to impose social costs for unapologetically endorsing the consensus view on global warming”, the latter being what I think Lumifer has been trying to do in this thread.)
Does this make sense?
I’m not criticizing the article, nor am I criticizing you. I’m criticizing a certain way of approaching things like this. I purposely refrain from staking a claim on whether it applies to the article or to you because I’m not interested in convincing you that it does or even determining for sure whether it does. I get the impression that it does apply, but who knows—I haven’t read the article and I can’t read your mind. If it doesn’t, then congrats, my criticism doesn’t apply to you.
Your thinking is on a very similar track to mine when you suggest the test “assuming you’re wrong, do you want them to agree or be right?”. The difference is that I don’t think that people saying “be right, of course” is meaningful at all. I think you gotta look at what actually happens when they’re confronted with new evidence that they are in fact wrong. If, when you’re sufficiently confident, you drop the distinction between your map and the territory, not just in loose speech but in internal representation, then you lose the ability to actually notice when you’re wrong, and your actions will not match your words. This happens all the time.
I’ve never had a geography or arithmetic class suffer from that failure mode, and most of the time when I disagreed with my teachers they responded in a way that actually helped us figure out which of us was right. However, in geometry, power electronics, and philosophy, I have run into this failure mode, where when I disagree all they can think of is “how do I convince him he’s wrong” rather than “let me address his point and see where that leads”, but that’s because those particular teachers sucked and not a fault of teaching in general. With respect to that paper, the title does seem to imply that they’ve dropped that distinction. It is very common on that topic for people to drop the distinction and refuse to pick it up, so I’m guessing that’s what they’re doing there. Who knows though, maybe they’re saints. If so, good for them.
Agreed.
I can straightforwardly say to you that there were dinosaurs millions of years ago because I expect that you’ll be with me on that and I don’t particularly care about alienating some observer who might disagree with us on that and is sensitive to that kind of thing. The important point is that the moment I find out that I’m actually interacting with someone who disagrees about what I presupposed, I stop presupposing that, apologize, and get curious—no matter how “wrong” they are, from my own viewpoint. It doesn’t matter if the topic is creationism or global warming or whether they should drive home blackout drunk because they’re upset.
A small minority of the times I won’t, and instead I’ll inform them that I’m not interested in interacting with them because they’re an idiot. That’s a valid response too, in the right circumstance. This is imposing social costs for beliefs, and I’m actually totally fine with it. I just want to be really sure that I am aware of what I’m doing, why I’m doing it, and that I have a keen eye out for the signs that I was missing something.
What I don’t ever want to do is base my interactions with someone on the presupposition that they’re wrong and/or unreasonable. If I’m going to choose to interact with them, I’m going to try to meet them where they’re at. This is true even when I can put on a convincing face and hide how I really see them. This is true even when I’m talking to some third party about how I plan to interact with someone else. If I’m planning on interacting with someone, I’m not presupposing they’re wrong/unreasonable. Because doing that would make me less persuasive in the cases I’m right and less likely to notice in the cases I’m not. There’s literally no upside, and there’s downside whether or not I am, in fact, right.
I wouldn’t ask this question in the first place.
Yes. It doesn’t surprise me that you believe that.
That seems like the sort of thing that really needs stating up front. It’s that Gricean implicature thing again: If someone writes something about goldfish and you respond with “It’s really stupid to think that goldfish live in salt water”, it’s reasonable (unless there’s some other compelling explanation for why you bothered to say that) to infer that you think they think goldfish live in salt water.
(And this sort of assumption of relevance is a good thing. It makes discussions more concise.)
For sure that’s far more informative. But, like it or not, that’s not information you usually have available.
Yup, it’s a thing that happens, and it’s a problem (how severe a problem depends on how well being “sufficiently confident” correlates, for the person in question, with actually being right).
As you say, there’s an important difference between dropping it externally and dropping it internally. I don’t know of any reliable way to tell when the former indicates the latter and when it doesn’t. Nor do I have a good way to tell whether the authors have strong enough evidence that dropping the distinction internally is “safe”, that they’re sufficiently unlikely to turn out to be wrong on the object level.
My own guess is that (1) it’s probably pretty safe to drop it when it comes to the high-level question “is climate change real?”, (2) the question w.r.t. which the authors actually show good evidence of having dropped the distinction is actually not that but “is there a strong expert consensus that climate change is real?”, and (3) it’s probably very safe to drop the distinction on that one; if climate change turns out not to be real then the failure mode is “all the experts got it wrong”, not “there was a fake expert consensus”. So I don’t know whether the authors are “saints” but I don’t see good reason to think they’re doing anything that’s likely to come back to bite them.
I think this is usually the correct strategy, and it is generally mine too. Not 100% always, however. Example: Suppose that for some reason you are engaged in a public debate about the existence of God, and at some point the person you’re debating with supports one of his arguments with some remark to the effect that of course scientists mostly agree that so-called dinosaur fossils are really the bones of the biblical Leviathan, laid down on seabeds and on the land during Noah’s flood. The correct response to this is much more likely to be “No, sorry, that’s just flatly wrong” than “gosh, that’s interesting, do tell me more so I can correct my misconceptions”.
That’s OK, you don’t need to; I already did. I was hoping you might answer it.
So, given what you were saying earlier about “imposing social costs”, about not presupposing people are unreasonable, about interacting with people respectfully if at all … You do know how that “It doesn’t surprise me …” remark comes across, and intend it that way, right?
(In case the answer is no: It comes across as very, very patronizing; as suggesting that you have understood how I, poor fool that I am, have come to believe the stupid things I believe; but that they aren’t worth actually engaging with in any way. Also, it is very far from clear what “that” actually refers to.)
If someone writes “it’s stupid to think that goldfish live in saltwater” there’s probably a reason they say this, and it’s generally not a bad guess that they think you think they can live in salt water. However, it is still a guess and to respond as if they are affirmatively claiming that you believe this is putting words in their mouth that they did not say and can really mess with conversations, as it has here.
Agree to disagree.
A big part of my argument is that it doesn’t matter if Omega comes down and tells you that you’re right. It’s still a bad idea.
Another big part is that even when people guess that they’re probably pretty safe, they end up being wrong a really significant portion of the time, and that from the outside view it is a bad idea to drop the distinction simply because you feel it is “probably pretty safe”—especially when there is absolutely no reason to do it and still reason not to even if you’re correct on the matter. (also, people are still often wrong even when they say “yeah, but that’s different. They’re overconfident, I’m actually safe”)
I note that you don’t. I do.
The point is that I don’t see it as worth thinking about. I don’t know what I would do with the answer. It’s not like I have a genie that is offering me the chance to eliminate the problems caused by one side or the other, but that I have to pick.
There are a lot of nuances in things like this, and making people locally more correct is not even always a good thing. I haven’t seen any evidence that you appreciate this point, and until I do I can only assume that this is because you don’t. It doesn’t seem that we agree on what the answer to that question would mean, and until we’re on the same page there it doesn’t make any sense to try to answer it.
I am very careful with what I presuppose, and what I said does not actually presuppose what you say it does. It’s not presupposing that you are wrong or not worth engaging with. It does imply that as it looks to me—and I do keep this distinction in mind when saying this—as it looks to me, it was not worth it for me to engage with you on that level at the time I said it. Notice that I am engaging with you and doing my best to get to the source of our actual disagreement—it’s just not on the level you were responding on. Before engaging with why you think my argument is wrong, I want to have some indication that you actually understand what my argument is, that’s all, and I haven’t seen it. This seems far less insulting to me than “poor fool who I am willing to presuppose believes stupid things and is not worth engaging with in any way”. Either way though, as a neutral matter of fact, I wasn’t surprised by anything you said so take it how you would like.
I’m not presupposing that you’re not worth engaging with on that level, but I am refusing to accept your presupposition that you are worth engaging with on that level. That’s up for debate, as far as I’m concerned, and I’m open to you being right here. My stance is that it is never a good idea to presuppose things that you can predict your conversation partner will disagree with unless you don’t mind them writing you off as an arrogant fool and disengaging, but that you never have to accept their presuppositions out of politeness. Do you see why this distinction is important to me?
I was aware that what I said was likely to provoke offense, and I would like to avoid that if possible. It’s just that, if you are going to read into what I say and treat it as if I am actively claiming things when you just have shaky reason to suspect that I privately believe them, then you’re making me choose between “doing a lot of legwork to prevent gjm from unfairly interpreting me” or “letting gjm unfairly interpret me and get offended by things I didn’t say”. I have tried to make it clear that I’m only saying what I’m saying, and that the typical inferences aren’t going to hold true, and at some point I gotta just let you interpret things how you will and then let you know that again, I didn’t claim anything other than what I claimed.
In my experience, when it messes with conversations it is usually because one party is engaging in what I would characterize as bad-faith conversational manoeuvres.
I’m not sure there’s anything I could say or do that you would take as such evidence. (General remark: throughout this discussion you appear to have been assuming I fail to understand things that I do in fact understand. I do not expect you to believe me when I say that. More specific remark: I do in fact appreciate that point, but I don’t expect you to believe me about that either.)
I am generally unenthusiastic about this sort of attempt to seize the intellectual high ground by fiat, not least because it is unanswerable if you choose to make it so; I remark that there are two ways for one person’s argument not to be well understood by another; and it seems to me that the underlying problem here is that from the outset you have proceeded on the assumption that I am beneath your intellectual level and need educating rather than engaging. However, I will on this occasion attempt to state your position and see whether you consider my attempt adequate. (If not, I suggest you write me off as too stupid to bother discussing with and we can stop.) I will be hampered here and there by the fact that in many places you have left important bits of your argument implicit, chosen not to oblige when I’ve asked you questions aimed at clarifying them, and objected when I have made guesses.
So. Suppose we have people A and B. A believes a proposition P (for application to the present discussion, take P to be something like “the earth’s climate has warmed dramatically over the last 50 years, largely because of human activity, and is likely to continue doing so unless we change what we’re doing”) and is very confident that P is correct. B, for all A knows, may be confident of not-P, or much less confident of P than A is, or not have any opinion on the topic just yet. The first question at issue is: How should A speak of P, in discussion with B (or with C, with B in the audience)? And, underlying it: How should A think of P, internally?
“Internally” A’s main options are (1) to treat P as something still potentially up for grabs or (2) to treat it as something so firmly established that A need no longer bother paying attention to how evidence and arguments for and against P stack up. With unlimited computational resources and perfect reasoning skills, #1 would be unambiguously better in all cases (with possible exceptions only for things, if any there be, so fundamental that A literally has no way of continuing to think if they turn out wrong); in practice, #2 is sometimes defensible for the sake of efficiency or (perhaps) if there’s a serious danger of being manipulated by a super-clever arguer who wants A to be wrong. The first of those reasons is far, far more common; I don’t know whether the second is ever really sufficient grounds for treating something as unquestionable. (But e.g. this sort of concern is one reason why some religious people take that attitude to the dogmas of their faith: they literally think there is a vastly superhuman being actively trying to get them to hold wrong beliefs.)
“Externally” A’s main options are (1) to talk of P as a disputable matter, to be careful to say things like “since I think P” rather than “since P”, etc., when talking to B; and (2) to talk as if A and B can both take it for granted that P is correct. There is some scope for intermediate behaviours, such as mostly talking as if P can be taken for granted but occasionally making remarks like “I do understand that P is disputed in some quarters” or “Of course I know you don’t agree about this, but it’s so much less cumbersome not to shoehorn qualifications into every single sentence”. There is also a “strong” form of #2 where A says or implies that no reasonable person would reject P, that P-rejecters are stupid or dishonest or crazy or whatever.
Your principal point is, in these terms, that “internally” #2 is very dangerous, even in cases where A is extremely confident that contrary evidence is not going to come along, and that “externally” #2 is something of a hostile act if in fact B doesn’t share A’s opinion because it means that B has to choose between acquiescing while A talks as if everyone knows that P, or else making a fuss and disagreeing and quite possibly being seen as rude. (And also because this sort of pressure may produce an actual inclination on B’s part to accept P, without any actual argument or evidence having been presented.) Introducing this sort of social pressure can make collective truthseeking less effective because it pushes B’s thinking around in (I think you might say, though for my part I would want to add some nuance) ways basically uncorrelated with truth. (There’s another opposite one, just as uncorrelated with truth or more so, which I don’t recall you mentioning: B may perceive A as hostile and therefore refuse to listen even if A has very strong evidence or arguments to offer.) And you make the secondary point that internal #2 and external #2 tend to spill over into one another, so that each also brings along the other’s dangers.
We are agreed that internal #2 is risky and external #2 is potentially (for want of a better term) rude, “strong” external #2 especially so. We may disagree on just how high the bar should be for choosing either internal or external #2, and we probably do disagree more specifically on how high it should be in the case where P is the proposition about global warming mentioned above.
(We may also disagree about whether it is likely that the authors of the paper we were discussing are guilty of choosing, or advocating that their readers choose, some variety of #2 when actually #1 would be better; about whether it is likely that I am; about whether it makes sense to apply terms like “crimethink” when someone adopts external #2; and/or about how good the evidence for that global-warming proposition actually is. But I take it none of that is what you wish to be considered “your argument” in the present context.)
In support of the claim that internal #2 is dangerous far more often than A might suppose, you observe (in addition to what I’ve already said above) that people are very frequently very overconfident about their beliefs; that viewed externally, A’s supreme confidence in P doesn’t actually make it terribly unlikely that A is wrong about P. Accordingly, you suggest, A is making a mistake in adopting internal #2 even if it seems to A that the evidence and arguments for P are so overwhelming that no one sane could really disagree—especially if there are in fact lots of people, in all other respects apparently sane, who do disagree. I am not sure whether you hold that internal #2 is always an error; I think everything you’ve said is compatible with that position but you haven’t explicitly claimed it and I can think of good reasons for not holding it.
In support of the claim that external #2 is worse than A might suppose, you observe that (as mentioned above) doing it imposes social costs on dissenters, thereby making it harder for them to think independently and also making it more likely that they will just go away and deprive A’s community of whatever insight they might offer. And (if I am interpreting correctly one not-perfectly-clear thing you said) that doing this amounts to deciding not to care about contrary evidence and arguments, in other words to implicitly adopting internal #2 with all its dangers. You’ve made it explicit that you’re not claiming that external #2 is always a bad idea; on the face of it you’ve suggested that external #2 is fine provided A clearly understands that it involves (so to speak) throwing B to the wolves; my guess is that in fact you consider it usually not fine to do that; but you haven’t made it clear (at least to me) what you consider a good way to decide whether it is. It is, of course, clear that you don’t consider that great confidence about P on A’s part is in itself sufficient justification. (For the avoidance of doubt, nor do I; but I think I am willing to give it more weight than you are.)
That’ll do for now. I have not attempted to summarize everything you’ve said, and perhaps I haven’t correctly identified what subset you consider “your argument” for present purposes. (In particular, I have ignored everything that appears to me to be directed at specific (known or conjectured) intellectual or moral failings of the authors of the paper, or of me, and attended to the more general point.)
Without restarting the discussion, let me point out what I see to be the source of many difficulties. You proposed a single statement to which you, presumably, want to attach some single truth value. However your statement consists of multiple claims from radically different categories.
“the earth’s climate has warmed dramatically over the last 50 years” is a claim of an empirical fact. It’s relatively easy to discuss it and figure out whether it’s true.
“largely because of human activity” is a causal theory claim. This is much MUCH more complex than the preceding claim, especially given the understanding (existing on LW) that conclusions about causation do not necessarily fall out of descriptive models.
“and is likely to continue doing so” is a forecast. Forecasts, of course, cannot be proved or disproved in the present. We can talk about our confidence in a particular forecast which is also not exactly a trivial topic.
Jamming three very different claims together and treating them as a single statement doesn’t look helpful to me.
It would be a probability, actually, and it would need a lot of tightening up before it would make any sense even to try to attach any definite probability to it. (Though I might be happy to say things like “any reasonable tightening-up will yield a statement to which I assign p>=0.9 or so”.)
Yes, it does.
For the avoidance of doubt, in writing down a conjunction of three simpler propositions I was not making any sort of claim that they are of the same sort, or that they are equally probable, or that they are equivalent to one another, or that it would not often be best to treat individual ones (or indeed further-broken-down ones) separately.
It seems perfectly reasonable to me. It would be unhelpful to insist that the subsidiary claims can’t be considered separately (though each of them is somewhat dependent on its predecessors; it doesn’t make sense to ask why the climate has been warming if in fact it hasn’t, and it’s risky at best to forecast something whose causes and mechanisms are a mystery to you) but, I repeat, I am not in any way doing that. It would be unhelpful to conflate the evidence for one sub-claim with that for another; that’s another thing I am not (so far as I know) doing. But … unhelpful simply to write down a conjunction of three closely related claims? Really?
You can, of course, write down anything you want. But I believe that treating that conjunction as a single “unit” is unhelpful, yes.
In what sense (other than writing it down, and suggesting that it summarizes what is generally meant by “global warming” when people say they do or don’t believe it) am I treating it as a single unit?
As I mentioned, I don’t want to restart the discussion. Feel free to discard my observation if you don’t find it useful.
“The earth’s climate has warmed by about x °C over the last 50 years” is a claim of an empirical fact. “It is dramatic for a planet to warm by about x °C in 50 years” is an expression of the speaker’s sense of drama.
Yeah, sure, but I’m skipping over the drama. If we ever find ourselves debating this, I’m sure that X will get established pretty quickly.
What you say below (“I do in fact appreciate that point”) is all it takes for this.
For what it’s worth, I feel the same way about this. From my perspective, it looks like you are assuming that I don’t get things that I do get, assuming things I am not assuming, saying things I’m not saying, not addressing my important points, being patronizing yourself, “gish galloping”, and generally arguing in bad faith. I just hadn’t made a big stink about it because I didn’t anticipate that you wanted my perspective on this or that it would cause you to rethink anything.
Being wrong about what one understands is common too (illusion of transparency, and all that), but I absolutely do take this as very significant evidence as it does differentiate you from a hypothetical person who is so wrapped up in ego defense that they don’t want to address this question.
Can you explain what you mean by “attempt to seize the intellectual high ground” and “it is unanswerable”, as it applies here? I don’t think I follow. I don’t think I’m “attempting to seize” anything, and have no idea what the question that “unanswerable” applies to is.
However, if you mean “you don’t seem interested in my rebuttal”, then you’re right, I was not. I have put a ton of thought into the ethics of persuasion over the last several years, and there aren’t really any questions here that I don’t feel like I have a pretty darn solid answer to. Additionally, if you don’t already think about these problems the way that I do, it’s actually really difficult to convey my perspective, even if communication is flowing smoothly. And it often doesn’t, because it’s also really really easy to think I’m talking about something else, leading to the illusion that my point has been understood. This combination makes run-of-the-mill disagreement quite uninteresting, and I only engaged because I mistook your original question for “I would like to learn how to differentiate between teaching and thought-policing”, not “I would like to argue that they aren’t thought policing and that you’re wrong to think they are”.
And again, I do not think it warrants accusations of “patronizing you poor, poor fool” for privately holding the current best guess that this disagreement is more likely to be about you misunderstanding my point than about me hallucinating something in their title. Am I allowed to believe I’m probably right, or do I have to believe that you’re probably right and that I’m probably wrong? Are you allowed to believe that you’re probably right?
It is far enough off that I can’t endorse it as “getting” where I’m coming from. For example, “being seen as rude”, itself, is so not what it’s about. There are often two very different ways of looking at things that can produce superficially similar prescriptions for fundamentally different reasons. It looks like you understand the one I do not hold, but do not realize that there is another, completely different, reason to not want to do #2 externally.
However, I do appreciate it as an intellectually honest attempt to check your understanding of my views and it does capture the weight of the main points themselves well enough that I’m curious to hear where you disagree (or if you don’t disagree with it as stated).
Somewhat relatedly but somewhat separately, I’m interested to hear how you think it applies to how you’ve approached things here. From my perspective, you’re doing a whole lot of the external #2 at me. Do you agree and think it’s justified? If so, how? Do you not see yourself as doing external #2 here? If so, do you understand how it looks that way to me?
Given this summary of my view, I do think I see why you don’t see it as suggesting that the researchers were making any mistake. The reason I do think they’re making a mistake is not present in your description of my views.
Hold on.
I gotta stop you there because that’s extremely unfair. I haven’t answered every question you’ve asked, but I have addressed most, if not all of them (and if there’s one I missed that you would like me to address, ask and I will). I also specifically addressed the fact that I don’t have a problem with you making guesses but that I don’t see it as very charitable or intellectually honest when you go above and beyond and respond as if I had actively claimed those things.
This is a very understandable reading of what I said, but no. I do not agree that what you call “external #2” is ever a good thing to do either. I also would not frame it that way in the first place.
I did not accuse you of that. I don’t think you’ve done that. I said that Lumifer did it because, well, he did: I said “no one is proposing X”, he said “what about A and B”, I pointed out that A and B were not in fact proposing X, and he posted another seven instances of … people not proposing X. A long sequence of bad arguments, made quickly but slower to answer: that is exactly what a Gish gallop is. I don’t think you’ve been doing that, I don’t think Lumifer usually does it, but on this occasion he did.
“Attempting to seize the intellectual high ground” = “attempting to frame the situation as one in which you are saying clever sensible things that the other guy is too stupid or blinkered or whatever to understand”. “Unanswerable if you choose to make it so” because when you say “I don’t think you have grasped my argument”, any response I make can be answered with “No, sorry, I was right: you didn’t understand my argument”—regardless of what I actually have understood or not understood. (I suppose one indication of good or bad faith on your part, in that case, would be whether you then explain what it is that I allegedly didn’t understand.)
I am greatly saddened, and somewhat puzzled, that you apparently think I might think the answer is no. (Actually, I don’t think you think I might think the answer is no; I think you are grandstanding.) Anyway, for the avoidance of doubt, I have not the slightest interest in telling anyone else what they are allowed to believe, and if (e.g.) what I have said upthread about that paper about global warming has led you to think otherwise then either I have written unclearly or you have read uncharitably or both.
The problem here is unclarity on my part or obtuseness on yours, rather than obtuseness on my part or unclarity on yours :-). The bit about “being seen as rude” was not intended as a statement of your views or of your argument; it was part of my initial sketch of the class of situations to which those views and that argument apply. The point at which I start sketching what I think you were saying is where I say “Your principal point is, in these terms, …”.
Well, I was (deliberately) attempting to describe what I took to be your position on the general issue, rather than on what the authors of the article might or might not have done. (I am not all that interested in what you think they have done, since you’ve said you haven’t actually looked at the article.) But it’s entirely possible that I’ve failed to notice some key part of your argument, or forgotten to mention it even though if I’d been cleverer I would have. I don’t suppose you’d like to explain what it is that I’ve missed?
Just in case anyone other than us is reading this, I would like to suggest that those hypothetical readers might like to look back at what I actually wrote and how you quoted it, and notice in particular that I explicitly said that I think your position probably isn’t the one that “on the face of it you’ve suggested”. (Though it was not previously clear to me that you think “external #2” is literally never a good idea. One reason is that it looks to me—and still does after going back and rereading—as if you explicitly said that you sometimes do it and consider it reasonable. See here and search for “A small minority”.)
As to the other things you’ve said (e.g., asking whether and where and why I disagree with your position), I would prefer to let that wait until you have helped me fix whatever errors you have discerned in my understanding of your position and your argument. Having gone to the trouble of laying it out, it seems like it would be a waste not to do that, don’t you think?
You’ve made specific mention of two errors. One (see above) wasn’t ever meant to be describing your position, so that’s OK. The other is that my description doesn’t mention “the reason I do think they’re making a mistake” (they = authors of that article whose title you’ve read); I don’t know whether that’s an error on my part, or merely something I didn’t think warranted mentioning, but the easiest way to find out would be for you to say what that reason is.
Your other comments give the impression that there are other deficiencies (e.g., “It is far enough off that I can’t endorse it as “getting” where I’m coming from.” and “It looks like you understand the one I do not hold, but do not realize that there is another, completely different, reason to not want to do #2 externally.”) and I don’t think it makes any sense to proceed without fixing this. (Where “this” is probably a lack of understanding on my part, but might also turn out to be that for one reason or another I didn’t mention it, or that I wasn’t clear enough in my description of what I took to be your position.) If we can’t get to a point where we are both satisfied that I understand you adequately, we should give up.
For whatever little it’s worth, I read the first few plies of these subthreads, and skimmed the last few.
From my partial reading, it’s unclear to me that Lumifer is/was actually lying (being deliberately deceptive). More likely, in my view, is/was that Lumifer sincerely thinks spurious your distinction between (1) criminalizing disbelief in global warming, and (2) criminalizing the promulgation of assertions that global warming isn’t real in order to gain an unfair competitive advantage in a marketplace. I think Lumifer is being wrong & silly about that, but sincerely wrong & silly. On the “crimethink” accusation as applied to the paper specifically, Lumifer plainly made a cheap shot, and you were right to question it.
As for your disagreement with jimmy, I’m inclined to say you have the better of the argument, but I might be being overly influenced by (1) my dim view of jimmy’s philosophy/sociology of argument, at least as laid out above, (2) my incomplete reading of the discussion, and (3) my knowledge of your track record as someone who is relatively often correct, and open to dissecting disagreement with others, often to a painstaking extent.
This is helpful; thanks.
I, also, appreciate this comment.
I would like to quibble here that I’m not trying to argue anything, and that if gjm had said “I don’t think the authors are doing anything nearly equivalent to crimethink and would like to see you argue that they are”, I wouldn’t have engaged because I’m not interested in asserting that they are.
I’d call it more “[...] of deliberately avoiding argument in favor of “sharing honestly held beliefs for what they’re taken to be worth”, to those that are interested”. If they’re taken (by you, gjm, whoever) to be worth zero and there’s no interest in hearing them and updating on them, that’s totally cool by me.
(comment split because it got too long)
It’s neither. I have a hard time imagining that you could say no. I was just making sure to cover all the bases because I also have a hard time imagining that you could still say that I’m actively trying to claim anything after I’ve addressed that a couple times.
I bring it up because at this point, I’m not sure how you can simultaneously hold the views “he can believe whatever he wants” and “he hasn’t done anything in addition that suggests judgement too” (which I get that you haven’t yet agreed to, but you also haven’t addressed my arguments that I haven’t), and then accuse me of trying to claim the intellectual high ground without cognitive dissonance. I’m giving you a chance to either teach me something new (i.e. “how gjm can simultaneously hold these views congruently”), or, in the case that you can’t, the chance for you to realize it.
Quoting you, “Your principal point is, in these terms, that [...] and that “externally” #2 is something of a hostile act if in fact B doesn’t share A’s opinion because it means that B has to choose between acquiescing while A talks as if everyone knows that P, or else making a fuss and disagreeing and quite possibly being seen as rude.” (emphasis mine)
That looks like it’s intended to be a description of my views to me, given that it directly follows the point where you start sketching out what my views are, following a “because”, and before the first period.
Even if it’s not, though, if you’re saying it as part of a sketch of the situation, it’s a sketch that anyone who sees things the way I do can tell I won’t find relevant, and the fact that you mention it anyway indicates either that you’re missing this, or that you’re presenting a sketch I don’t agree with as if my disagreement were irrelevant.
Right. I think it is the correct approach to describe my position in general. However, the piece of my general position that would come into play in this specific instance was not present so if you apply those views as stated, of course you wouldn’t have a problem with what the authors have done in this specific instance.
I am also not interested in what (I think) they have done in the article. I have said this already, but I’ll agree again if you’d like. You’re right to not be interested in this.
Honestly, I would love to. I don’t think I’m capable of explaining it to you as of where we stand right now. Other people, yes. Once we get to the bottom of our disagreement, yes. Not until then though.
This conversation has been fascinating to me, but it has also been a bit fatiguing to make the same points and not see them addressed. I’m not sure we’ll make it that far, but it’d be awesome if we do.
Yes, I noticed that qualification and agree. On the face of it, it certainly does look that way. That’s what I meant by “a very understandable reading”.
However, the preceding line is “You’ve made it explicitly that you’re not claiming that external #2 is always a bad idea”, and that is not true. I said “A small minority of the times I won’t [...]”, and what follows is not explicitly “external #2”. I can see how you would group what follows with “external #2”, but I do not. This is what I mean when I say that I predict you will assume that you’re understanding what I’m saying when you do not.
This seems backwards to me. Again, with the double cruxing, you have to agree on F before you can agree on E before you can agree on D before you can even think about agreeing on the original topic. This reads to me like you saying you want me to explain why we disagree on B before you address C.
Not necessarily. I think it’s perfectly fine to be uninterested in helping you fix the errors I discern in your understanding of my argument, unless I had already gone out of my way to give you reason to believe I would if you laid out your understanding for me. Especially if I don’t think you’ll be completely charitable.
I haven’t gone out of my way to give you reason to believe I would, since I wasn’t sure at the time, but I’ll state my stance explicitly now. This conversation has been fascinating to me. It has also been a bit fatiguing, and I’m unsure of how long I want to continue this. To the extent that it actually seems we can come to the bottom of our disagreement, I am interested in continuing. If we get to the point where you’re interested in hearing it and I think it will be fruitful, I will try to explain the difference between my view and your attempt to describe them.
As I see it now, we can’t get there until I understand why you treat what I see as “privately holding my beliefs, and not working to hide them from (possibly fallacious) inference” as if it is “actively presupposing that my beliefs are correct, and judging anyone who disagrees as ‘below me’”. I also don’t think we can get there until we can agree on a few other things that I’ve brought up and haven’t seen addressed.
Either way, thanks for the in depth engagement. I do appreciate it.
On “being seen as rude”: I beg your pardon, I was misremembering exactly what I had written at each point. However, I still can’t escape the feeling that you are either misunderstanding or (less likely) being deliberately obscure, because what you actually say about this seems to me to assume that I was presenting “being seen as rude” as a drawback of doing what I called “external #2”, whereas what I was actually saying is that one problem with “external #2” is that it forces someone who disagrees to do something that could be seen as rude; that’s one mechanism by which the social pressure you mentioned earlier is applied.
Except that what you are actually doing is repeatedly telling me that I have not understood you correctly, and not lifting a finger to indicate what a correct understanding might be and how it might differ from mine. You keep talking about inferential distances that might prevent me understanding you, but seem to make no effort even to begin closing the alleged gap.
In support of this, in the other half of your reply you say I “seem to be acting as if it’s impossible to be on step two honestly and that I must be trying to hide from engagement if I am not yet ready to move on to step three”; well, if you say that’s how it seems to you then I dare say it’s true, but I am pretty sure I haven’t said it’s “impossible to be on step two honestly” because I don’t believe that, and I’m pretty sure I haven’t said that you “must be trying to hide from engagement” because my actual position is that you seem to be behaving in a way consistent with that but of course there are other possibilities. And you say that I “should probably make room for both possibilities” (i.e., that you do, or that you don’t, see things I don’t); which is odd because I do in fact agree that both are possibilities.
So. Are you interested in actually making progress on any of this stuff, or not?
Right. I’m not accusing you of doing it. You didn’t say it outright, I don’t expect you to endorse that description, and I don’t see any reason even to start to form an opinion on whether it accurately describes your behavior or not. I was saying it as more of a “hey, here’s what you look like to me. I know (suspect?) this isn’t what you look like to you, so how do you see it and how do I square this with that?”. I just honestly don’t know how to square these things.
If, hypothetically, I’m on step two because I honestly believe that if I tried to explain my views you would likely prematurely assume that you get it, and that it makes more sense to address this meta level first, and if, hypothetically, I’m even right and have good reasons to believe I’m right… what’s your prescription? What should I do, if that were the case? What could I do to make it clear that I am arguing in good faith, if that were the case?
If you can tell me where to start that doesn’t presuppose that my beliefs are wrong or that I’ve been arguing in bad faith, I would love to. Where would you have me start?
Whereas I honestly don’t know how to help you square them, because I don’t see anything in what I wrote that seems like it would make a reasonable person conclude that I think it’s impossible to be on your “step 2” honestly, or that I think you “must be trying to hide from engagement” (as opposed to might be, which I do think).
My general prescription for this sort of situation (and I remark that not only do I hope I would apply it with roles reversed, but that’s pretty much what I am doing in this discussion) is: proceed on the working assumption that the other guy isn’t too stupid/blinkered/crazy/whatever to appreciate your points, and get on with it; or, if you can’t honestly give that assumption high enough probability to make it worth trying, drop the discussion altogether.
(This is also, I think, the best thing you could do to make it clear, or at any rate markedly more probable to doubtful onlookers, that you’re arguing in good faith.)
The same place as I’ve been asking you to start for a while: you say I haven’t understood some important parts of your position, so clarify those parts of your position for me. Adopt the working assumption that I’m not crazy, evil or stupid but that I’ve missed or misunderstood something. Sure, it might not work: I might just be too obtuse to get it; in that case that fact will become apparent (at least to you) and you can stop wasting your time. Or it might turn out—as, outside view, it very frequently does when someone smart has partially understood something and you explain to them the things you think they’ve missed—that I will understand; or—as, outside view, is also not so rare—that actually I understood OK already and there was some sort of miscommunication. In either of those cases we can get on with addressing whatever actual substantive disagreements we turn out to have, and maybe at least one of us will learn something.
(In addition to the pessimistic option of just giving up, and the intermediate option of making the working assumption that I’ve not understood your position perfectly but am correctible, there is also the optimistic option of making the working assumption that actually I’ve understood it better than you think, and proceeding accordingly. I wouldn’t recommend that option given my impression of your impression of my epistemic state, but there are broadly-similar situations in which I would so I thought I should mention it.)
All of the options you explicitly list imply disrespect. If I saw all other options as implying disrespect as well, I would agree that “if you can’t honestly give that assumption high enough probability to make it worth trying, [it’s best to] drop the discussion altogether”.
However, I see it as possible to have both mutual respect and predictably counterproductive object level discussion. Because of this, I see potential for fruitful avenues other than “plow on the object level and hope it works out, or bail”. I have had many conversations with people whom I respect (and who by all means seem to feel respected by me) where we have done this to good results—and I’ve been on the other side too, again, without feeling like I was being disrespected.
Your responses have all been consistent with acting like I must be framing you as stupid/blinkered/crazy/otherwise-unworthy-of-respect if I don’t think object level discussion is the best next step. Is there a reason you haven’t addressed the possibility that I’m being sincere and that my disinterest in “just explaining my view” at this point isn’t predicated on me concluding that you’re stupid/blinkered/crazy/otherwise-unworthy-of-respect? Even to say that you hear me but conclude that I must be lying/crazy since that’s obviously too unlikely to be worth considering?
The thing is, that does presuppose that my belief that “in this case, as with many others with large inferential distance, trying to simply clarify my position will result in more misunderstanding than understanding, on expectation, and therefore is not a good idea—even if the other person isn’t stupid/blinkered/crazy/otherwise-undeserving-of-respect” is wrong. Er… unless you’re saying “sure, you might be right, and maybe it could work your way and couldn’t work my way, but I’m still unwilling to take that seriously enough to even consider doing things your way. My way or it ain’t happenin’.”
If it’s the latter case, and if, as you seem to imply, this is a general rule you live by, I’m not sure what your plan is for dealing with the possibility of object level blind spots—but I guess I don’t have to. Either way, it’s a fair response here, if that’s the decision you want to make—we can agree to disagree here too.
Anyway, if you’re writing all these words because you actually want to know how the heck I see it, then I’ll see what I can do. It might take a while because I expect it to take a decent amount of work and probably end up long, but I promise I will work at it. If, on the other hand, you’re just trying to do an extremely thorough job at making it clear that you’re not closed to my arguments, then I’d be happy to leave it as “you’re unwilling to consider doing things my way”+”I’m unwilling to do things your way until we can agree that your way is the better choice”, if that is indeed a fair description of your stance.
(Sorta separately, I’m sure I’d have a bunch of questions on how you see things, if you’d have any interest in explaining your perspective)
Well, the one I’m actually proposing doesn’t, but I guess you mean the others do. I’m not sure they exactly do, though I certainly didn’t make any effort to frame them in tactfully respect-maximizing terms; in any case, it’s certainly not far off to say they all imply disrespect. I agree that there are situations in which you can’t explain something without preparation and no disrespect to the other guy is implied; but I don’t think this is one of them, because what happened was
jimmy says some things
gjm responds
jimmy starts saying things like “Before engaging with why you think my argument is wrong, I want to have some indication that you actually understand what my argument is, that’s all, and I haven’t seen it.”
rather than, say,
jimmy says “so I have a rather complicated and subtle argument to make, so I’m going to have to begin with some preliminaries”.
When what happens is that you begin by making your argument and then start saying: nope, you didn’t understand it—and when your reaction to a good-faith attempt at dealing with the alleged misunderstanding is anything other than “oh, OK, let me try to explain more clearly”—I think it does imply something like disrespect; at least, as much like disrespect as those options I listed above. Because what you’re saying is: you had something to say that you thought was appropriate for your audience, and not the sort of thing that needed advance warning that it was extra-subtle; but now you’ve found that I don’t understand it and (you at least suspect) I’m not likely to understand it even if you explain it.
That is, it means that something about me renders me unlikely—even when this is locally the sole goal of the discussion, and I have made it clear that I am prepared to go to substantial lengths to seek mutual understanding—to be able to understand this thing that you want to say, and that you earlier thought was a reasonable thing to say without laying a load of preparatory groundwork.
See above for why I haven’t considered it likely; the reason I haven’t (given that) addressed it is that there’s never time to address everything.
If there is a specific hypothesis in this class that you would like us to entertain, perhaps you should consider saying what it is.
No, it presupposes that it could be wrong. (I would say it carries less presumption that it’s wrong than your last several comments in this thread carry presumption that it’s right.) The idea is: it could be wrong, in which case giving it a go will bring immediate benefit; or it could be right, but we could be (mutually) reasonable enough to see that it’s right when we give it a go and it doesn’t work, in which case giving it a go will get us past the meta-level stuff about whether I’m likely to be unable to understand. Or, of course, it could go the other way.
When one is suspected, look at it up close and see whether it really is one. Which, y’know, is what I’m suggesting here.
What I was hoping to know, in the first instance, is what I have allegedly misunderstood in what you wrote before. You know, where you said things of the form “your description doesn’t even contain my actual reason for saying X”—which I took, for reasons that still look solid to me, to indicate that you had already given your actual reason.
If the only way for you to explain all my serious misunderstandings of what you wrote is for you to write an effortful lengthy essay about your general view … well, I expect it would be interesting. But on the face of it that seems like more effort than it should actually take. And if the reason why it should take all that effort is that, in essence, I have (at least in your opinion) understood so little of your position that there’s no point trying to correct me rather than trying again from scratch at much greater length then I honestly don’t know why you’re still in this discussion.
I am happy to answer questions. I’ve had it pretty much up to here (you’ll have to imagine a suitable gesture) with meta-level discussion about what either of us may or may not be capable of understanding, though, so if the questions you want to ask are about what you think of me or what I think of you or what I think you think I think you think I am capable of understanding, then let’s give that a miss.
I suppose I could have said “so I have a rather complicated and subtle argument to make. I would have to begin with some preliminaries and it would end up being kinda long and take a lot of work, so I’m not sure it’s worth it unless you really want to hear it”, and in a lot of ways I expect that would have gone better. I probably will end up doing this next time.
However in a couple key ways, it wouldn’t have, which is why I didn’t take that approach this time. And that itself is a complicated and subtle argument to make.
EDIT: I should clarify. I don’t necessarily think I made the right choice here, and it is something I’m still thinking about. However, it was an explicit choice and I had reasons.
Right, and I think this is our fundamental disagreement right here. I don’t think it implies any disrespect at all, but I’m happy to leave it here if you want.
I see where you’re coming from, but I don’t think arguments with subtle backing always need that warning, nor do they always need to be intended to be fully understood in order to be worth saying. This means that “I can’t give you an explanation you’ll understand without a ton of work” doesn’t single you out nearly as much as you’d otherwise think.
I can get into this if you’d like, but it’d just be more meta shit, and at this point my solution is starting to converge with yours: “do the damn write up or shut up, jimmy”
I agree that you can’t address everything (nor have I), but this one stands out as the one big one I keep getting back to—and one where if you addressed it, this whole thing would resolve pretty much right away.
It seems like now that you have, we’re probably gonna end up at something more or less along the lines of “we disagree whether ‘mutual respect’ and ‘knowably unable to progress on the object level’ go together to a non-negligible degree, at least as it applies here, and gjm is uninterested in resolving this disagreement”. That’s an acceptable ending for me, so long as you know that it is a genuine belief of mine and that I’m not just trying to weasel around denying that I’ve been showing disrespect and shit.
I thought I addressed that possibility with the “err, or this” bit.
I was talking about the ones where that won’t work, which I see as a real thing though you might not.
If I ever end up writing it up, I’ll let you know.
:)
That’d probably have to be a part of the write up, as it calls on all the same concepts
The first part I feel like I’ve already addressed and haven’t seen a response to (the difference between staking active claims vs speaking from a place that you choose to draw (perhaps fallacious) inferences from and then treat as if they’re active claims).
The second part is interesting though. It’s pretty darn answerable to me! I didn’t realize that you thought that I might hear an answer that perfectly paces my views and then just outright lie “nope, that’s not it!”. If that’s something you think I could even conceivably do, I’m baffled as to why you’d be putting energy into interacting with me!
But yes, it does place the responsibility on me of deciding whether you understand my pov and reporting honestly on the matter. And yes, not all people will want to be completely honest on the matter. And yes, I realize that you don’t have reason to be convinced that I will be, and that’s okay.
However, it would be very stupid of me not to be. I can hide away in my head for as long as I want, and no matter how hard you try, and no matter how obvious the signs become, if I’m willing to ignore them all I can believe my believies for as long as I want and pretend that I’m some sort of wise guru on the mountain top, and that everyone else just lacks my wisdom. You’re right: if I want to hide from the truth and never give you the opportunity to convince me that I’m wrong, I can. And that would be bad.
But I don’t see what solution you have to this, as if the inferential distance is larger than you realize, then your method of “then explain what it is that I allegedly didn’t understand” can’t work because if you’re still expecting a short inferential distance then you will have to either conclude that I’m speaking gibberish or that I’m wrong—even if I’m not.
It’s like the “double crux” thing. We’re working our way down the letters, and you’re saying “if you think I don’t understand your pov you should explain where I’m wrong!” and I’m saying “if I thought that you would be able to judge what I’m saying without other hidden disagreements predictably leading to faulty judgements, then I would agree that is a good idea”. I can’t just believe it’s a good idea when I don’t, and yes, that looks the same as “I’m unwilling to stick my neck out because I secretly know I’m wrong”. However, it’s a necessary thing whenever the inferential distance is larger than one party expects, or when one party believes it to be so (and if you don’t believe that I believe that it is… I guess I’d be interested in hearing why). We can’t shortcut the process by pointing at it being “unanswerable”. It is what it is.
It’d be nice if this weren’t ever an issue, but ultimately I think it’s fine because there’s no free lunch. If I feel cognitive dissonance and don’t admit that you have a point, it tends to show, and that would make me look bad. If it doesn’t show somehow, I still fail to convince anyone of anything. I still fail to give anyone any reason to believe I’m some wise guru on the mountaintop even if I really really want them to believe that. It’s not going to work, because I’m not doing anything to distinguish myself from that poser that has nothing interesting to say.
If I want to actually be able to claim status, and not retreat to some hut muttering at how all the meanies won’t give me the status that I deserve, I have to actually stick my neck out and say something useful and falsifiable at some point. I get that—which is why I keep making the distinction between actively staking claims and refusing to accept false presuppositions.
The thing is, my first priority is actually being right. My second priority is making sure that I don’t give people a reason to falsely conclude that I’m wrong and that I am unaware of or/unable to deal with the fact that they think that. My third priority is that I actually get to speak on the object level and be useful. I’m on step two now. You seem to be acting as if it’s impossible to be on step two honestly and that I must be trying to hide from engagement if I am not yet ready to move on to step three with you. I don’t know what else to tell you. I don’t agree.
If you don’t want to automatically accept that I see things you don’t (and that these things are hard to clearly communicate to someone with your views), then that’s fine. I certainly don’t insist that you accept that I do. Heck, I encourage skepticism. However, I’m not really sure how you can know that I don’t, and it seems like you should probably make room for both possibilities if you want to have a productive conversation with me (and it’s fine if you don’t).
The main test that I use in distinguishing between wise old men on mountain tops and charlatans is whether my doubt in them provokes signs of cognitive dissonance—but there are both false positives and false negatives there. A second test I use is to see whether this guy has any real world results that impress me. A third is to see whether I can get him to say anything useful to me. A fourth is whether there are in fact times that I end up eventually seeing things his way on my own.
It’s not always easy, and I’ve realized again and again that even foolish people are wiser than I give them credit for, so at this point I’m really hesitant to rule that out just so that I can actively deny their implicit claim to status. I prefer to just not actively grant it, and say something like “yes, you might be really wise, but I can’t see that you’re not a clown, and until I do I’m going to have to assign a higher probability to the latter. If you can give me some indication that you’re not a clown, I would appreciate that, and I understand that if you don’t, it is no proof that you are”.
I think you’re much confused between arguments and evidence in support of a single argument.
If you go back through my comments on LW (note: I am not actually suggesting you do this; there are a lot of them, as you know) you will find that in this sort of context I almost always say explicitly something like “evidence and arguments”, precisely because I am not confused about the difference between the two. Sometimes I am lazy. This was one of those times.
Bad arguments and bad evidence can serve equally well in a Gish gallop.