Warning—for reasons that I haven’t quite understood, talking about diet can be almost as bad as politics in its potential for controversy and mind-killing.
I believe Steve Sailer hypothesized that every controversial issue has at least one unpleasant truth which people are subconsciously aware of (or afraid of) but reluctant to face. This, according to the hypothesis, generates the cognitive dissonance which generates the mind-killing emotion.
In these discussions of diet and obesity, I wonder what percentage of the participants are obese/overweight and troubled by their obesity/overweight.
every controversial issue has at least one unpleasant truth which people are subconsciously aware of (or afraid of) but reluctant to face.
Both sides of a controversy may assert that they are in possession of an “unpleasant truth” that the other side is afraid to admit to. Conspiracy theories work pretty well as “unpleasant truths”, for instance; as do prejudices, eschatological or afterlife beliefs, and so on.
So that doesn’t seem to be a good guide to who is correct when you’re within the controversy.
So that doesn’t seem to be a good guide to who is correct when you’re within the controversy.
I agree 100%. And in fact (well, at least in my opinion) there are basically no shortcuts to determining who is correct. Because once a shortcut is announced, the fellow who is in the wrong will attempt to twist things to make the shortcut appear to favor him.
Nevertheless, Sailer’s theory may be correct.
Two separate points:
1) As a heuristic, that seems like rhetoric designed to create a bias towards certain viewpoints, not rationality. In every controversy there are two sides that believe different things, and often at least one side believes that the other is missing some sort of “uncomfortable truth” which collapses their worldview. There’s no a priori reason to assume uncomfortable-truth-proposers are the correct ones in a given controversy.
2) There may or may not be “uncomfortable truths” around obesity relating to its relationship to willpower, etc. However, that’s not sufficient to explain why talking about the fine points of high fat vs. high carb diets should ignite controversy. If the title of this post were “Obese are fat due to X moral failure”, I’d understand the controversy, but the outcome of the carb vs. fat debate won’t negatively impact any demographic, except for various business interests and concerns about animal rights or environmentalism. (And that doesn’t really seem to be the debate driver either.)
How do people’s identities get tangled up with a carb vs. fat position, and the question of whether or not a high-fat-low-carb diet allows for a higher caloric consumption without gaining more weight? On the face of it, the question seems specific and non-threatening—no threat of moral failure, no criticism… only the possibility that some people might benefit by changing their diet composition.
As a heuristic, that seems like rhetoric designed to create a bias towards certain viewpoints, not rationality. In every controversy there are two sides that believe different things, and often at least one side believes that the other is missing some sort of “uncomfortable truth” which collapses their worldview. There’s no a priori reason to assume uncomfortable-truth-proposers are the correct ones in a given controversy.
Putting aside whether it’s a useful heuristic or not, one can ask if it is in fact true. Put another way, why is it that some disputes excite a lot more emotion than others?
Next, if Sailer’s hypothesis is correct, one can then ask if his observation can be used as a heuristic to figure out which side is more likely to be correct. As I pointed out in another post, I think the answer is “No.” I think that in general, there are no proxies to figuring out which side of a controversy is correct. Because proxies are vulnerable to munchkinism.
There may or may not be “uncomfortable truths” around obesity relating to its relationship to willpower, etc. However, that’s not sufficient to explain why talking about the fine points of high fat vs. high carb diets should ignite controversy.
I’m not sure about that; my impression is that a lot of people—including fat people—have a lot invested emotionally in their views on which diets are effective and which aren’t. If someone (1) is fat; and (2) has beliefs about which diets are effective, it seems that there would be a lot of opportunity for cognitive dissonance.
Next, if Sailer’s hypothesis is correct, one can then ask if his observation can be used as a heuristic to figure out which side is more likely to be correct. As I pointed out in another post, I think the answer is “No.”
I think the answer is “yes”, so let’s clarify what we mean: I read the phrase
I believe Steve Sailer hypothesized that every controversial issue has at least one unpleasant truth which people are subconsciously aware of (or afraid of) but reluctant to face.
as
“unpleasant propositions which are commonly debated are likely to be true”
There are debates where one side is arguing for a belief which both sides find unpleasant. Sailer’s hypothesis implies that any commonly debated proposition which is less pleasant than its alternative explanations is more likely to be true.
Explaining it a different way:
Blue: X is a more pleasant belief than not-X. Also, X is true.
Green: X is a more pleasant belief than not-X. However, X is false.
not-X is the only possible candidate for an “unpleasant truth”, so wouldn’t Sailer’s hypothesis elevate the priors for X being false?
The implicit rationale here for Sailer’s hypothesis would presumably be, “no one wants to believe not-X and therefore everyone is systematically biased towards X. Thus, the existence of individuals who believe not-X elevates the prior for not-X proportionately more than the existence of individuals who believe X. So, all else being equal, favor the unpleasant hypothesis.”
The hypothesis is essentially modeled on the logic that a negative result from a test which has a tendency towards false positives is more informative than a positive result from the same test.
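To make the test analogy concrete, here is a minimal worked sketch of that logic. All the numbers (the 50-50 prior, and an observer who endorses the pleasant belief X 95% of the time when it is true and 80% of the time even when it is false) are made-up illustrations, not anything taken from Sailer:

```python
# Worked Bayes example for the "test prone to false positives" analogy.
# All rates and priors below are made up purely for illustration.

def posterior_x(prior_x, p_report_given_x, p_report_given_not_x):
    """Probability of X after hearing a given report from the 'test' (a biased observer)."""
    p_report = prior_x * p_report_given_x + (1 - prior_x) * p_report_given_not_x
    return prior_x * p_report_given_x / p_report

prior = 0.5               # start undecided between X and not-X
p_says_x_if_x = 0.95      # the observer nearly always endorses the pleasant belief X when X is true
p_says_x_if_not_x = 0.80  # ...and still endorses X most of the time when X is false (false positive)

# Hearing "X" from such an observer barely moves the estimate:
print(posterior_x(prior, p_says_x_if_x, p_says_x_if_not_x))          # ~0.54
# Hearing "not-X" (condition on the complementary report rates) moves it a lot:
print(posterior_x(prior, 1 - p_says_x_if_x, 1 - p_says_x_if_not_x))  # ~0.20
```

On these made-up numbers, someone asserting the unpleasant belief shifts the odds much more than someone asserting the pleasant one, which is the “all else being equal, favor the unpleasant hypothesis” reading of the heuristic.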
I think the answer is “yes”, so let’s clarify what we mean: I read the phrase . . . as “unpleasant propositions which are commonly debated are likely to be true”
I didn’t read it that way. And looking at what issues excite a lot of emotion, it appears to me that frequently the real trigger is related only indirectly.
There are debates where one side is arguing for a belief which both sides find unpleasant.
That may be so, but it’s not so easy to assess unpleasantness. For example, if you talk to some survivalists you get the impression that they WANT there to be a societal breakdown even though it would mean millions of people dying horribly of famine, disease etc.
Besides, I think a lot of disingenuous people are smart enough to say something like “I wish my position were untrue” or “I have no dog in this fight” in order to enhance their credibility.
it’s not so easy to assess unpleasantness...they WANT there to be a societal breakdown
Well, presumably because it would prove them right all along, not because they enjoy chaos...but it doesn’t hold any explanatory power to say that people feel strong emotions towards certain epistemic questions because certain beliefs are more or less pleasant, and then to turn around and say that the reason a belief is (un)pleasant is that it affirms/contradicts a previously held belief. That’s circular.
The initial idea of saying that there is an “unpleasant truth” in every controversy was to create a theory that had predictive power over which issues people would get emotional over. If we then say that unpleasant truths are those which prove people wrong, we lose predictive power—in LW terms, our theory stops paying rent.
We’d be better off just saying “people don’t like changing their minds” in general, if we’re not going to predict which issues and which conditions will create this sort of emotional stubbornness.
I think a lot of disingenuous people are smart enough to say something like “I wish my position were untrue”
I think it’s important to create a distinction between the satisfaction of having one’s beliefs confirmed vs. actually wishing certain beliefs to be true. They are both sources of bias and mind-kill, but they are very different. The survivalists are presumably feeling satisfaction for the former reason (vindication) when faced with talk of society collapsing, even as they do not feel the latter (true preference for a universe where society collapses).
Well, presumably because it would prove them right all along, not because they enjoy chaos
I’m not so sure about that. Once, after a few drinks, I directly confronted a survivalist about this issue. He basically told me that due to his working class background, he felt locked out of the elite; that if there were a societal breakdown he would have the opportunity to become a high status person.
I would guess that a lot of survivalists have feelings along these lines: that they resent modern society’s power structure and that at some level they wish it would fall apart.
But anyway, I agree you have articulated a problem with Sailer’s hypothesis. You can always find an “unpleasant truth,” particularly if you read “unpleasant truth” to include situations where people’s long-held beliefs are wrong, regardless of whether the underlying beliefs are pleasant or unpleasant.
The initial idea of saying that there is an “unpleasant truth” in every controversy was to create a theory that had predictive power
I’m not sure if that’s the idea, but regardless of whether or not that was the aim, I certainly agree that if the hypothesis lacks predictive power then there’s a good chance it’s worthless.
One can put things a slightly different way: How do you know if people are facing evidence of an uncomfortable truth apart from them getting emotional about it?
I think it’s important to create a distinction between the satisfaction of having one’s beliefs confirmed vs. actually wishing certain beliefs to be true. They are both sources of bias and mind-kill, but they are very different. The survivalists are presumably feeling satisfaction for the former reason (vindication) when faced with talk of society collapsing, even as they do not feel the latter (true preference for a universe where society collapses).
Putting aside my question about survivalists’ preferences, why draw the distinction? Ultimately the effect is the same, no?
why draw the distinction? Ultimately the effect is the same, no?
I don’t think so. To continue the survivalist example—a survivalist who wanted the belief that civilization would collapse to be true would be making villainous plots to cause the collapse. A survivalist who simply wanted to be vindicated but didn’t actually desire collapse would look at the first signs of collapse, tell everyone “I told you so” with a rather smug expression, and then join them in the fight to prevent civilization from collapsing.
How do you know if people are facing evidence of an uncomfortable truth apart from them getting emotional about it?
Being emotional is probably not a good signal of this. For example, plenty of atheists are emotional about religion—that doesn’t mean they are uncomfortably aware that it’s actually true in some corner of their minds. One might be emotional because one believes that people who hold certain viewpoints are damaging society.
I think self-deception from uncomfortable truths has some unique tells which are distinct from sheer negative affect. Some of these are discussed in the “belief in belief” articles—to the extent that they can do so without becoming consciously aware of it, the person will basically act as if they believe the uncomfortable truth is true, even while professing that it is false.
I think belief in a good afterlife where we will all be together is the most obvious example of this pattern—most people simply don’t act as if death is nothing more than a temporary separation when faced with actual death, regardless of what they profess to believe. At some implicit level, I think most people know that the separation is permanent. (There are exceptions, of course—I’ve seen some particularly strong believers who really were relatively unperturbed in the face of death.)
I don’t think so. To continue the survivalist example—a survivalist who wanted the belief that civilization would collapse to be true would be making villainous plots to cause the collapse. A survivalist who simply wanted to be vindicated but didn’t actually desire collapse would look at the first signs of collapse, tell everyone “I told you so” with a rather smug expression, and then join them in the fight to prevent civilization from collapsing.
I disagree with this based on my general observations of survivalists. I haven’t noticed any of them plotting to undermine civilization. Also, I doubt that any of them would do much to prevent a collapse. And just introspecting, there are a lot of things I wish were different about the world but I am doing little or nothing to bring about such changes. I think my attitude is pretty common.
Perhaps more importantly, even if what you are saying is correct, how does it relate to the subject at hand—which is predicting which topics will generate a lot of heat in discussion?
Being emotional is probably not a good signal of this. For example, plenty of atheists are emotional about religion—that doesn’t mean they are uncomfortably aware that it’s actually true in some corner of their minds.
I agree that other things can get people worked up besides cognitive dissonance.
Some of these are discussed in the “belief in belief” articles—to the extent that they can do so without becoming consciously aware of it, the person will basically act as if they believe the uncomfortable truth is true, even while professing that it is false.
I like that idea. So one can hypothesize that, at a minimum, in any area where a lot of people’s actions are inconsistent with their professed beliefs, discussion of those beliefs will tend to generate a lot of heat, so to speak. Not sure that covers everything, but it seems like a good start.
There are exceptions, of course—I’ve seen some particularly strong believers who really were relatively unperturbed in the face of death
And quite possibly those same people remain relatively unperturbed when debating life after death. :)
I believe Steve Sailer hypothesized that every controversial issue has at least one unpleasant truth which people are subconsciously aware of (or afraid of) but reluctant to face.
I would assume that if the theory is true, both sides have something they don’t want to admit.
Lol yes, notice I said “at least one unpleasant truth.” But sometimes one side is basically right and the other side is basically wrong.
You also have the effect of non-overweight people who enjoy being sanctimonious or condescending.
It would also be informative what kinds of people find what kind of advice sanctimonious and condescending.
I agree that’s a huge problem. On most discussion boards, when the topic of obesity comes up, there will always be a couple guys who show up to say something like “Just stop stuffing your face and you’ll lose weight.” Which typically generates a lot of bickering. Still, one can ask what is going on psychologically in such situations.
Frankly, I suspect the uncomfortable truth here is that there is barely enough evidence to pick hypotheses out of hypothesis-space, much less test them, thus both sides are mostly bullshitting and afraid of getting called on it.
That may be; it ties in with the hypothesis advanced by another poster—that everyone has experience with eating so there is a tendency to overconfidence.
Anyway, Paul Graham had some interesting thoughts on what kinds of issues generate controversy; basically it has to do with people’s self-defined identities:
http://paulgraham.com/identity.html
More generally, you can have a fruitful discussion about a topic only if it doesn’t engage the identities of any of the participants
That maxim seems far too cautious. It would, for instance, suggest that you can’t have a “fruitful discussion” about installing wheelchair ramps with people who identify as disabled; nor can you have a “fruitful discussion” about neurodiversity with people who identify as autistic, ADHD, etc.; and so on.
I would suggest a less-cautious variant: you can’t have a fruitful discussion about a topic when any one party to the discussion presumes that another party’s purpose in entering the discussion is illegitimate.
Also —
If you are discussing censorship laws with a lady who keeps using the word “smut,” you will experience that same sense of banging your head against a brick wall. If you attempt to reason with a Marxist, the word “bourgeoise” will eventually be invoked to banish any coherence or logic in what you have been saying.
— Robert Anton Wilson, “Sleep-Walking and Hypnotism”, from Natural Law, or Don’t Put a Rubber On Your Willy.
It would, for instance, suggest that you can’t have a “fruitful discussion” about installing wheelchair ramps with people who identify as disabled
I would say that self-identifying as disabled is not quite the same thing as self-identifying with a group which believes in / advocates for rights for disabled people. Getting back to the religion analogy, one can ask about having a discussion of religion with a Jew—in such a situation it would help to know if the individual is Jewish like Albert Einstein or Jewish like Moshe Feiglin.
you can’t have a fruitful discussion about a topic when any one party to the discussion presumes that another party’s purpose in entering the discussion is illegitimate.
Perhaps, but what if that’s just a side effect of the heat generated by a controversial issue?
That maxim seems far too cautious. It would, for instance, suggest that you can’t have a “fruitful discussion” about installing wheelchair ramps with people who identify as disabled; nor can you have a “fruitful discussion” about neurodiversity with people who identify as autistic, ADHD, etc.; and so on.
Well, I don’t understand mere conceptual proximity to imply engagement in Graham’s sense. Neurodiversity, for example, implies a particular approach to neurological issues; you can identify as ADHD etc. without identifying as a neurodiverse individual or being identity-entangled with any particular attitude toward that model. If you are talking to someone who identifies as neurodiverse, though, or who’s adopted an identity directly excluding that identity, then I think Graham’s caution applies.
That’s not to say that talking with such a person on such a topic is necessarily a waste of time, though; one or both of you might learn something about the arguments being used, or about facts to apply. And you might find your opinion shifting, if you don’t have an identity in the game. What you can’t expect is to shift an entangled opinion without first breaking down the identity it’s entangled with—and trying, or even being perceived as such, is a good way to send the discussion straight to hell.
I do agree that a presumption of bad faith would exclude fruitful discussion in a much stronger sense.