I understand your quite sensible reluctance to set before yourself the task of making and proclaiming a judgment on the truth of each of your chosen “contrarian claims”. Unfortunately, this means that you’re excluding a big chunk of hypothesis space for reasons of convenience rather than on any principled basis, and so your entire investigation is fundamentally of questionable epistemic value.
Suppose you do your investigation and conclude that highly educated people are attracted to your chosen “contrarian claims” for reason X (where X is something that has nothing to do with said claims’ truth values). Now suppose I read your findings, and I say to you: “You say the reason educated people are attracted to these claims is reason X; but I think actually the reason is that these claims are true. What steps did you take to rule out this alternative explanation, and on what basis do you judge said explanation to be less plausible than your provided explanation (which invokes reason X)?”
You would have no answer for me, isn’t that so? You could only say “I took no such steps; and I can make no such judgment.”
And given this, why should anyone take your proffered explanation seriously—whatever that explanation might be?
You are misinterpreting the purpose of the study, and then accusing me of missing something so fundamental that you doubt the epistemic value of the entire investigation. The actual study is an experiment in which different sets of arguments are offered for the same contrarian position, in a between-subjects design measuring belief change. The truth value is not actually relevant to me; what matters is which kinds of arguments people find compelling, conditional on whether the position is contrarian or conventional.
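To make the setup concrete, here is a minimal sketch of the design as described, written in Python. The argument-set labels, position labels, cell means, and noise level are all hypothetical, chosen purely for illustration and not drawn from the actual study.

```python
import random
from statistics import mean

# All labels and effect sizes below are hypothetical, for illustration only.
ARGUMENT_SETS = ["argument_set_A", "argument_set_B"]
POSITION_KINDS = ["contrarian", "conventional"]

def simulated_belief_change(argument_set, position_kind, rng):
    """Stand-in for one participant's measured pre/post belief shift.

    In the real study this would be the difference between a participant's
    belief rating before and after reading the assigned arguments; here it
    is just noise around made-up cell means.
    """
    shift = 5.0
    if argument_set == "argument_set_A" and position_kind == "contrarian":
        shift += 3.0  # assumed interaction, purely illustrative
    if argument_set == "argument_set_B" and position_kind == "conventional":
        shift += 2.0
    return shift + rng.gauss(0, 8)

def run_design(n_participants=400, seed=0):
    rng = random.Random(seed)
    cells = {(a, p): [] for a in ARGUMENT_SETS for p in POSITION_KINDS}
    for _ in range(n_participants):
        # Between-subjects: each participant is randomly assigned to exactly
        # one (argument set, position kind) cell and never sees the others.
        cell = (rng.choice(ARGUMENT_SETS), rng.choice(POSITION_KINDS))
        cells[cell].append(simulated_belief_change(*cell, rng))
    return {cell: mean(changes) for cell, changes in cells.items()}

if __name__ == "__main__":
    for cell, avg in run_design().items():
        print(cell, "mean belief change:", round(avg, 1))
```

The quantity of interest is the difference in mean belief change across cells; nothing in the analysis depends on whether any of the positions are true.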
I understand you to be saying that you just want to find out whether, when a belief is known to be contrary to popular opinion, people with degrees from high-status universities are more likely to adopt it as their own.
I guess there’s something interesting here about which kinds of beliefs people wear as clothing and which kinds transmit because of truthful arguments for them. I don’t think that testing the questions “Do people ever like to believe things because they think they’re in on a secret that the rest of the world is too foolish to realise?” and “Which particular demographic does it the most?” is likely to be helpful. I expect it will come up with “Yes, we found a small but positive effect size”, “Well-educated people do it a very little bit”, and “People employed at tech jobs do it a little bit more”. Maybe you have a reason to think this is helpful?
Like, it’s not clear it’s going to be a very robust result: depending on whether it’s currently in-season to be contrarian, or in-season to be meta-contrarian, studies like this will give you opposite results, and the only real takeaway is that “we can use information about the current fashion to change people’s beliefs.”
I think there are more interesting questions to ask, like:
Which current conversations are propagating because of status/class signalling?
What are the main mechanisms by which such coordination on signalling occurs?
Where in society is the real conversation that is trying to figure out true things, and through what medium is that conversation conducted?
What causes people to use one type of reasoning (signalling-driven) rather than the other (truth-seeking)?
Your assumptions about the research interest are incorrect (although likely through no fault of your own, as I was intentionally being vague). The actual experiment tests different argumentative techniques on different kinds of positions, depending on the initial level of background support a position has (contrarian or conventional).
See the comment I made at the top of the thread:
“To be clear: this study is about testing different argumentative techniques on different kinds of positions (conventional vs contrarian). It’s not about the overarching reasons why someone who already subscribes to a contrarian position might have been persuaded by it in the first place.”
How do you propose to separate the effects of argumentative techniques from the effects of “the overarching reasons why someone who already subscribes to a contrarian position might have been persuaded by it in the first place”? That is, how would you correct for this clearly quite serious confounding factor?
This seems fairly easy to handle by randomizing the types of arguments and the positions, no?
I identify individuals who don’t currently subscribe to a contrarian belief. I give a random half of them one kind of argument for the position, and the other half a different kind of argument for it. I compare belief change between the two groups. There are more components to the study, but I’m not interested in defending the research methodology.
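A minimal sketch of that procedure, under stated assumptions: a 0-100 belief scale, two hypothetical argument conditions, and a faked exposure step with made-up effect sizes standing in for actually showing participants the arguments.

```python
import random
from statistics import mean

def run_between_subjects(pre_beliefs, seed=0):
    """Randomly assign screened participants to one of two argument
    conditions, then compare mean belief change between the two groups.
    The exposure step is simulated with made-up effect sizes.
    """
    rng = random.Random(seed)
    pool = list(pre_beliefs)
    rng.shuffle(pool)                       # random assignment to conditions
    half = len(pool) // 2
    arms = {"argument_A": pool[:half], "argument_B": pool[half:]}

    def fake_exposure(pre, argument):
        # Placeholder for "show this participant the assigned argument and
        # re-measure their belief"; the shifts are illustrative only.
        shift = 6.0 if argument == "argument_A" else 3.0
        return min(100.0, pre + shift + rng.gauss(0, 5))

    changes = {
        argument: [fake_exposure(pre, argument) - pre for pre in group]
        for argument, group in arms.items()
    }
    return {argument: mean(deltas) for argument, deltas in changes.items()}

if __name__ == "__main__":
    # Hypothetical pre-exposure ratings (0-100) from people screened as not
    # currently endorsing the contrarian position (hence the low values).
    screen_rng = random.Random(1)
    pre_beliefs = [screen_rng.uniform(5, 30) for _ in range(200)]
    results = run_between_subjects(pre_beliefs)
    print(results)
    print("difference in mean belief change:",
          round(results["argument_A"] - results["argument_B"], 1))
```

Because assignment to the two argument conditions is random, whatever prior reasons participants have for their existing views are balanced across the groups in expectation, so the between-group difference reflects the effect of the argument type rather than of those prior reasons.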