Two separate points:

1) As a heuristic, that seems like rhetoric designed to create a bias towards certain viewpoints, not rationality. In every controversy there are two sides that believe different things, and often at least one side believes that the other is missing some sort of “uncomfortable truth” which collapses their worldview. There’s no a priori reason to assume the uncomfortable-truth-proposers are the correct ones in a given controversy.
2) There may or may not be “uncomfortable truths” around obesity’s relationship to willpower, etc. However, that’s not sufficient to explain why talking about the fine points of high-fat vs. high-carb diets should ignite controversy. If the title of this post were “Obese people are fat due to X moral failure”, I’d understand the controversy, but the outcome of the carb vs. fat debate won’t negatively impact any demographic, except for various business interests and concerns about animal rights or environmentalism. (And that doesn’t really seem to be the debate driver either.)
How do people’s identities get tangled up with a carb vs. fat position, and with the question of whether a high-fat, low-carb diet allows for higher caloric consumption without additional weight gain? On the face of it, the question seems specific and non-threatening—no threat of moral failure, no criticism… only the possibility that some people might benefit by changing their diet composition.
As a heuristic, that seems like rhetoric designed to create a bias towards certain viewpoints, not rationality. In every controversy there are two sides that believe different things, and often at least one side believes that the other is missing some sort of “uncomfortable truth” which collapses their worldview. There’s no a priori reason to assume the uncomfortable-truth-proposers are the correct ones in a given controversy.
Putting aside whether it’s a useful heuristic or not, one can ask if it is in fact true. Put another way, why is it that some disputes excite a lot more emotion than others?
Next, if Sailer’s hypothesis is correct, one can then ask if his observation can be used as a heuristic to figure out which side is more likely to be correct. As I pointed out in another post, I think the answer is “No.” I think that in general, there are no proxies for figuring out which side of a controversy is correct, because proxies are vulnerable to munchkinism.
There may or may not be “uncomfortable truths” around obesity’s relationship to willpower, etc. However, that’s not sufficient to explain why talking about the fine points of high-fat vs. high-carb diets should ignite controversy.
I’m not sure about that; my impression is that a lot of people—including fat people—have a lot invested emotionally in their views on which diets are effective and which aren’t. If someone (1) is fat; and (2) has beliefs about which diets are effective, it seems that there would be a lot of opportunity for cognitive dissonance.
Next, if Sailer’s hypothesis is correct, one can then ask if his observation can be used as a heuristic to figure out which side is more likely to be correct. As I pointed out in another post, I think the answer is “No.”
I think the answer is “yes”, so let’s clarify what we mean: I read the phrase
I believe Steve Sailer hypothesized that every controversial issue has at least one unpleasant truth which people are subconsciously aware of (or afraid of) but reluctant to face.
as
“unpleasant propositions which are commonly debated are likely to be true”
There are debates where one side is arguing for a belief which both sides find unpleasant. Sailer’s hypothesis implies that any commonly debated proposition which is less pleasant than its alternative explanations is more likely to be true.
Explaining it a different way:
Blue: X is a more pleasant belief than not-X. Also, X is true.
Green: X is a more pleasant belief than not-X. However, X is false.
not-X is the only possible candidate for an “unpleasant truth”, so wouldn’t Sailer’s hypothesis elevate the prior for X being false?
The implicit rationale for Sailer’s hypothesis would presumably be: “no one wants to believe not-X, and therefore everyone is systematically biased towards X. Thus, the existence of individuals who believe not-X elevates the prior for not-X proportionately more than the existence of individuals who believe X. So, all else being equal, favor the unpleasant hypothesis.”
The hypothesis is essentially modeled on the logic by which a negative result from a test prone to false positives is more informative than a positive result from the same test.
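A minimal numerical sketch of that analogy, with the test figures chosen purely for illustration (none of the numbers come from the discussion above):

```python
from math import log

# Hypothetical test that is biased towards positive results (assumed numbers).
sensitivity = 0.9          # P(positive | condition present)
false_positive_rate = 0.5  # P(positive | condition absent), deliberately high

# Likelihood ratios: the factor by which each result multiplies the prior odds.
lr_positive = sensitivity / false_positive_rate              # 0.9 / 0.5 = 1.8
lr_negative = (1 - sensitivity) / (1 - false_positive_rate)  # 0.1 / 0.5 = 0.2

# Informativeness, measured as the size of the shift in log-odds.
print(abs(log(lr_positive)))   # ~0.59: a positive result moves beliefs only a little
print(abs(log(lr_negative)))   # ~1.61: a negative result moves beliefs much more
```

On this reading, widespread bias towards the pleasant belief X plays the role of the test's tendency to come back positive, and encountering people who believe not-X anyway is the surprising "negative" result that carries more evidential weight.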
I think the answer is “yes”, so let’s clarify what we mean: I read the phrase . . . as “unpleasant propositions which are commonly debated are likely to be true”
I didn’t read it that way. And looking at what issues excite a lot of emotion, it appears to me that frequently the real trigger is only indirectly related to the question ostensibly under debate.
There are debates where one side is arguing for a belief which both sides find unpleasant.
That may be so, but it’s not so easy to assess unpleasantness. For example, if you talk to some survivalists you get the impression that they WANT there to be a societal breakdown, even though it would mean millions of people dying horribly of famine, disease, etc.
Besides, I think a lot of disingenuous people are smart enough to say something like “I wish my position were untrue” or “I have no dog in this fight” in order to enhance their credibility.
it’s not so easy to assess unpleasantness...they WANT there to be a societal breakdown
Well, presumably because it would prove them right all along, not because they enjoy chaos. But it doesn’t hold any explanatory power to say that people feel strong emotions towards certain epistemic questions because certain beliefs are more or less pleasant, and then to turn around and say that the reason a belief is (un)pleasant is that it affirms/contradicts a previously held belief. That’s circular.
The initial idea of saying that there is an “unpleasant truth” in every controversy was to create a theory that had predictive power over which issues people would get emotional over. If we then say that unpleasant truths are those which prove people wrong, we lose predictive power—in LW terms, our theory stops paying rent.
We’d be better off just saying “people don’t like changing their minds” in general, if we’re not going to predict which issues and which conditions will create this sort of emotional stubbornness.
I think a lot of disingenuous people are smart enough to say something like “I wish my position were untrue”
I think it’s important to draw a distinction between the satisfaction of having one’s beliefs confirmed and actually wishing certain beliefs to be true. They are both sources of bias and mind-kill, but they are very different. The survivalists are presumably feeling satisfaction for the former reason (vindication) when faced with talk of society collapsing, even as they do not feel the latter (a true preference for a universe where society collapses).
Well, presumably because it would prove them right all along, not because they enjoy chaos
I’m not so sure about that. Once, after a few drinks, I directly confronted a survivalist about this issue. He basically told me that, because of his working-class background, he felt locked out of the elite, and that if there were a societal breakdown he would have the opportunity to become a high-status person.
I would guess that a lot of survivalists have feelings along these lines: that they resent modern society’s power structure and that at some level they wish it would fall apart.
But anyway, I agree you have articulated a problem with Sailer’s hypothesis: you can always find an “unpleasant truth,” particularly if you read “unpleasant truth” to include situations where people’s long-held beliefs are wrong, regardless of whether the underlying beliefs are pleasant or unpleasant.
The initial idea of saying that there is an “unpleasant truth” in every controversy was to create a theory that had predictive power
I’m not sure that was the idea, but regardless of the aim, I certainly agree that if the hypothesis lacks predictive power then there’s a good chance it’s worthless.
One can put things a slightly different way: How do you know if people are facing evidence of an uncomfortable truth apart from them getting emotional about it?
I think it’s important to draw a distinction between the satisfaction of having one’s beliefs confirmed and actually wishing certain beliefs to be true. They are both sources of bias and mind-kill, but they are very different. The survivalists are presumably feeling satisfaction for the former reason (vindication) when faced with talk of society collapsing, even as they do not feel the latter (a true preference for a universe where society collapses).
Putting aside my question about survivalists’ preferences, why draw the distinction? Ultimately the effect is the same, no?
why draw the distinction? Ultimately the effect is the same, no?
I don’t think so. To continue the survivalist example—a survivalist who wanted the belief that civilization would collapse to be true would be making villainous plots to cause the collapse. A survivalist who simply wanted to be vindicated but didn’t actually desire collapse would look at the first signs of collapse, tell everyone “I told you so” with a rather smug expression, and then join them in the fight to prevent civilization from collapsing.
How do you know if people are facing evidence of an uncomfortable truth apart from them getting emotional about it?
Being emotional is probably not a good signal of this. For example, plenty of atheists are emotional about religion—that doesn’t mean they are uncomfortably aware that it’s actually true in some corner of their minds. One might be emotional because one believes that people who hold certain viewpoints are damaging society.
I think self-deception from uncomfortable truths has some unique tells which are distinct from sheer negative affect. Some of these are discussed in the “belief in belief” articles—to the extent that they can do so without becoming consciously aware of it, the person will basically act as if they believe the uncomfortable truth is true, even while professing that it is false.
I think belief in a good afterlife where we will all be together is the most obvious example of this pattern—most people simply don’t act as if death is nothing more than a temporary separation when faced with actual death, regardless of what they profess to believe. At some implicit level, I think most people know that the separation is permanent. (There are exceptions, of course—I’ve seen some particularly strong believers who really were relatively unperturbed in the face of death.)
I don’t think so. To continue the survivalist example—a survivalist who wanted the belief that civilization would collapse to be true would be making villainous plots to cause the collapse. A survivalist who simply wanted to be vindicated but didn’t actually desire collapse would look at the first signs of collapse, tell everyone “I told you so” with a rather smug expression, and then join them in the fight to prevent civilization from collapsing.
I disagree with this based on my general observations of survivalists. I haven’t noticed any of them plotting to undermine civilization, and I doubt that any of them would do much to prevent a collapse. Also, just introspecting, there are a lot of things I wish were different about the world, but I am doing little or nothing to bring about such changes. I think my attitude is pretty common.
Perhaps more importantly, even if what you are saying is correct, how does it relate to the subject at hand—which is predicting which topics will generate a lot of heat in discussion?
Being emotional is probably not a good signal of this. For example, plenty of atheists are emotional about religion—that doesn’t mean they are uncomfortably aware that it’s actually true in some corner of their minds.
I agree that other things can get people worked up besides cognitive dissonance.
Some of these are discussed in the “belief in belief” articles—to the extent that they can do so without becoming consciously aware of it, the person will basically act as if they believe the uncomfortable truth is true, even while professing that it is false.
I like that idea. So one can hypothesize that, at a minimum, in any area where a lot of people’s actions are inconsistent with their professed beliefs, discussion of those beliefs will tend to generate a lot of heat, so to speak. Not sure that covers everything, but it seems like a good start.
There are exceptions, of course—I’ve seen some particularly strong believers who really were relatively unperturbed in the face of death.
And quite possibly those same people remain relatively unperturbed when debating life after death. :)