I am currently conducting research on the **seductive appeal** of contrarian positions, particularly among intellectuals.
[Emphasis mine]
I just want to note that the bolded phrasing is really quite tendentious. I hope you’re not actually taking the perspective on contrarian positions that this sentence implies… if you are, then you’re starting from a severely biased perspective, which can hardly bode well for the validity of your research.
Point taken. I’ve edited the main body to limit editorializing. I have a hypothesis, and that hypothesis is rooted in survey data suggesting highly educated people are more likely to entertain beliefs that are inconsistent with majority opinion. I’m not concerned about the truth value of these contrarian positions, just why certain arguments in support of them appear appealing to certain kinds of people (and if that’s experimentally testable).
To be clear: this study is about testing different argumentative techniques on different kinds of positions (conventional vs contrarian). It’s not about the overarching reasons why someone who already subscribes to a contrarian position might have been persuaded by it in the first place.
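In design terms, the condition grid looks something like this (a sketch only; the particular technique names are hypothetical placeholders, not the study's actual set):

```python
# Sketch of the factorial design: argument technique x position type.
# The technique names are hypothetical examples, not the real stimuli.
from itertools import product

techniques = ["appeal_to_evidence", "appeal_to_authority", "narrative"]
position_types = ["conventional", "contrarian"]

# Each participant is randomized into exactly one cell.
conditions = list(product(techniques, position_types))
# e.g. ("appeal_to_evidence", "contrarian") is one of the six cells
```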
I’m not concerned about the truth value of these contrarian positions, just why they appear appealing to certain kinds of people (and if that’s experimentally testable).
This seems a very odd way of approaching the question. Surely the truth value of any given position has something to do with how appealing it is?
At the very least, you’ve got to examine—even if only to rule out!—the obvious explanation: that more highly educated people are better at discerning truth, and that “contrarian” positions appeal to such people to the extent that they are more correct than the “mainstream” views in each case. How can you hope to have any kind of a sensible answer to your question if you ignore the issue of the truth of any given position?
EDIT: And since we’re on the topic—doesn’t it seem likely that a position is more likely to have “compelling arguments” for it… if it’s true? That seems like it should influence your conclusion somehow, doesn’t it?
At the moment, I'm not interested in having to be the arbiter of what is true on particularly complex topics. (Indeed, the research has nothing to do with this question, as it's about testing the persuasiveness of ARGUMENTS; contrarian vs. conventional is just one of the factors being varied.) Initially, I considered generating only contrarian positions that are decidedly untrue (e.g., that vaccines cause autism, or that the moon landing was faked), as opposed to more ambiguous contrarian positions, but most of what I'm interested in are unpopular views that are plausibly compelling, at least on first hearing.
I understand your quite sensible reluctance to set before yourself the task of making and proclaiming a judgment on the truth of each of your chosen "contrarian claims". Unfortunately, this means that you're excluding a big chunk of hypothesis space for reasons of convenience rather than on any principled basis, which leaves your entire investigation of fundamentally questionable epistemic value.
Suppose you do your investigation and you conclude that highly educated people are attracted to your chosen "contrarian claims" for reason X (where X is something that has nothing to do with said claims' truth values). Now suppose I read your findings, and I say to you: "You say the reason educated people are attracted to these claims is reason X; but I think actually the reason is that these claims are true. What steps did you take to rule out this alternate explanation, and on what basis do you judge said explanation to be less plausible than your provided explanation (which invokes reason X)?"
You would have no answer for me, isn’t that so? You could only say “I took no such steps; and I can make no such judgment.”
And given this, why should anyone take your proffered explanation seriously—whatever that explanation might be?
You are misinterpreting the purpose of the study, and then accusing me of missing something so fundamental that you doubt the epistemic value of the entire enterprise. The actual study involves an experiment in which different sets of arguments are offered for the same contrarian position, in a between-subjects study of belief change. The truth value is not actually relevant to me; what matters is the kinds of arguments people find compelling, conditional on whether the position is contrarian or conventional.
I understand you to be saying that you just want to find out whether, when a belief is known to be contrary to popular opinion, people who have university degrees from high-status universities are more likely to take it on as their own.
I guess there's something interesting here about which kinds of beliefs people wear as clothing and which kinds of beliefs transmit because of truthful arguments for them. But I don't think that testing the hypotheses "Do people ever like to believe things because they think they're in on a secret that the rest of the world is too foolish to realise?" and "Which particular demographic does it the most?" is likely to be helpful. I expect it will come up with "Yes, we found a small but positive effect size", "Well-educated people do it a very little bit", and "People employed at tech jobs do it a little bit more". Maybe you have a reason why this would be helpful?
Like, it's not clear it's going to be a very robust result: depending on whether it's in-season to be contrarian, or in-season to be meta-contrarian, studies like this will give you opposite results, and the only real result is that "we can use information about the current fashion to change people's beliefs."
I think there are more interesting questions to ask, like:
Which current conversations are propagating because of status/class signalling?
What are the main mechanisms by which such coordination on signalling occurs?
What / where in society is the true conversation that is trying to figure out true things, and by what medium is that conversation had?
What causes people to use one type of reasoning versus the other?
Your assumptions about the research interest are incorrect (although likely through no fault of your own, as I was intentionally being vague). The actual experiment tests different argumentative techniques on certain kinds of positions, depending on the initial level of background support a position has (contrarian or conventional).
See the comment I made at the top of the thread:
“To be clear: this study is about testing different argumentative techniques on different kinds of positions (conventional vs contrarian). It’s not about the overarching reasons why someone who already subscribes to a contrarian position might have been persuaded by it in the first place.”
How do you propose to separate the effects of argumentative techniques from the effects of “the overarching reasons why someone who already subscribes to a contrarian position might have been persuaded by it in the first place”? That is, how would you correct for this clearly quite serious confounding factor?
This seems fairly easy by randomizing the types of arguments and the positions, no?

I identify individuals who don't currently subscribe to a contrarian belief. I give a random half of them one kind of argument for the position, and the other half another kind of argument for it. I compare belief change between the two camps. There are more components to the study, but I'm not interested in defending the research methodology.
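Still, for concreteness, here's a minimal sketch of that screen-randomize-compare step (the column names, the 1-7 response scale, and the choice of test are placeholder assumptions of mine, not the actual study materials):

```python
# Minimal sketch of the design described above: screen out prior
# subscribers, randomize to an argument type, compare belief change.
# All names and the analysis choice are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

def assign_arm(participants: pd.DataFrame) -> pd.DataFrame:
    """Keep only non-subscribers, then randomize to one of two argument types.

    Expects a boolean 'prior_subscriber' column.
    """
    eligible = participants[~participants["prior_subscriber"]].copy()
    arms = np.resize(["argument_A", "argument_B"], len(eligible))
    eligible["arm"] = rng.permutation(arms)  # balanced random assignment
    return eligible

def compare_belief_change(results: pd.DataFrame) -> dict:
    """Expects columns 'arm', 'pre_belief', 'post_belief' (e.g. a 1-7 scale)."""
    delta = results["post_belief"] - results["pre_belief"]
    a = delta[results["arm"] == "argument_A"]
    b = delta[results["arm"] == "argument_B"]
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    return {"mean_change_A": a.mean(), "mean_change_B": b.mean(), "t": t, "p": p}
```

Because assignment to argument type is random, any difference in mean belief change between the arms is attributable to the arguments themselves, not to whatever reasons existing subscribers had for their beliefs.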
I agree with Said that truth (or precision of model, for untestable positions) is likely to be an important upstream causal factor if you’re talking about correlation with IQ or education. Other correlates may have other causes.
Do you have a metric for conventionality or contrarian-ness of an idea in a population? How do you decide whether "credit is risky; prefer cash" is the normal position or the rebel one? This metric could be useful on its own: seeing how different groups accept or reject various hypotheses could be a fascinating study.
The problem is that I can't possibly have the expertise to discern which of the contrarian positions are true, and if I were to try to independently arrive at my own conclusions, I would invariably end up deferring to experts and authorities on the subject, whose view would, in most cases, be the non-contrarian position. My current method for operationalizing contrariness is simply to look at how popular a given belief is across the relevant social groups one belongs to.
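Concretely, something like this (the survey table, its columns, and the cutoff are placeholder assumptions, not settled methodology):

```python
# Sketch of the popularity-based operationalization of contrariness.
# The survey data frame and its column names are hypothetical.
import pandas as pd

def contrarianness(survey: pd.DataFrame, position: str,
                   groups: list[str]) -> float:
    """Return a score in [0, 1]: 0 if everyone in the relevant groups
    endorses the position, 1 if no one does.

    Expected columns: 'group', 'position', 'endorses' (bool).
    """
    rows = survey[(survey["position"] == position)
                  & (survey["group"].isin(groups))]
    return 1.0 - rows["endorses"].mean()

# A position might then count as "contrarian" above some cutoff, e.g.
# contrarianness(survey, "credit is risky; prefer cash", ["US adults"]) > 0.8
```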