I think this was a tenable position a decade or two ago, but it’s no longer tenable, due to the dynamic described in this tweet:
Suppose an ideology says you’re not allowed to question idea X. At first X might not be very important. But now when people want to argue for Y, “X->Y” and “~Y->~X” are both publicly irrefutable. So over time X will become more and more load-bearing for censorious ideologies.
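(Side note: “X->Y” and “~Y->~X” are contrapositives of one another, so they are the same claim in two guises; tabooing challenges to one automatically shields the other. For anyone who wants that spelled out, here’s a minimal sketch in Lean, assuming nothing beyond propositional logic:)

```lean
-- Forward direction: from X → Y, derive ¬Y → ¬X.
example (X Y : Prop) : (X → Y) → (¬Y → ¬X) :=
  fun hxy hny hx => hny (hxy hx)

-- Converse direction: from ¬Y → ¬X, derive X → Y
-- (this direction uses classical reasoning).
example (X Y : Prop) : (¬Y → ¬X) → (X → Y) :=
  fun h hx => Classical.byContradiction (fun hny => h hny hx)
```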
We can also think of this as a variant of Goodhart’s law, which I’ll call ideological Goodhart (and have just tweeted about here): any false belief that cannot be questioned by adherents of an ideology will become increasingly central to that ideology. As this process plays out, advocates of that ideology will adopt increasingly extreme positions, and support increasingly crazy policies.
Tbc: It should be fine to argue against those implications, right? It’s just that if you grant the implication X->Y, then you can’t publicly refute Y.
Unfortunately the way that taboos work is by surrounding the whole topic in an aversive miasma. If you could carefully debate the implications of X, then that would provide an avenue for disproving X, which would be unacceptable. So instead this process tends to look more like “if you don’t believe Y then you’re probably the sort of terrible person who believes ~X”, and now you’re tarred with the connotation even if you try to carefully explain why you actually have different reasons for not believing Y (which is what you’d likely say either way).
I expect this effect to be weaker than you’re suggesting, especially if Y is something you in fact independently care about, and not an otherwise unimportant proximal detail that could reasonably be interpreted as a “just asking questions” means of arguing for ~X. I’m struggling to think of a particularly illustrative X and Y, but consider X = “COVID was not a lab leak”, which seemed lightly taboo to disagree with in 2020. Here’s a pair of tweets you could have sent in 2020:
1. “I think COVID was probably a lab leak.”
2. “I don’t know whether COVID was a lab leak. (In fact for now I’m intentionally not looking into it, because it doesn’t seem important enough to outweigh the risk of arriving at taboo beliefs.) But gain-of-function research in general is unacceptably risky, in a way that makes global pandemic lab leaks a very real possibility, and we should have much stronger regulations to prevent that.”
I expect the second one would receive notably less pushback, even though it defends Y = “gain-of-function research is unacceptably risky”, and suggests that Y provides evidence for ~X.
(Disagree that it was a tenable position a decade or two ago, agree that it is an untenable position now)
Fair, this isn’t a confident claim from me. I do have a sense that the last decade has been particularly bad in terms of blatant preference falsification, but it’s hard to distinguish “the world was different before then” from “I was younger and didn’t have a great sense of what was going on”.
I’m 62, so I was an adult in the ’80s and ’90s. My sense is that the world was different. The consequences of expressing a divergent opinion seem much more serious now.
Can you say what position you recommend instead? Is it just opining publicly about everything, with no regard to how taboo it is?
Suppose an ideology says you’re not allowed to question idea X.
I think there are two different kinds of “not questioning”: there’s unquestioningly accepting an idea as true, and there’s refusing to question and remaining agnostic. The latter position is reasonable in the sense that if you refuse to investigate an issue, you shouldn’t have any strong beliefs about it. And I think the load-bearingness is only a major issue if you refuse to question X while also accepting that X is true.
I don’t see why it should be limited to false beliefs.
Note that even if X is true, X->Y need not be true, and it can still be harmful to not be able to question X->Y.
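(To make that concrete in the same Lean style as above: take X := True and Y := False; then X holds while X->Y fails, so an unquestionable X->Y can protect a falsehood even when X itself is true:)

```lean
-- X can be true while X → Y is false: take X := True, Y := False.
example : True ∧ ¬(True → False) :=
  ⟨trivial, fun h => h trivial⟩
```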
This is a good point. Though the thing about true beliefs is that there is a specific version of them that’s true, which you’re allowed to defend (if you can find it). And so you can more easily figure out what the implications are.
Whereas for false beliefs you can’t get into the specifics, because looking hard enough at the specifics will tend to disprove the belief.
Is there another strategy you prefer? Afaict the options are
1) Have public taboo beliefs.
2) Have private beliefs that you lie about.
3) Remain deliberately agnostic about topics that are taboo but not important enough to be worth the risk of investigating.
4) Get forever lucky, such that every taboo topic you investigate results in you honestly arriving at an allowed belief.
Whether 1) is at all compatible with having other career goals is a fact of the territory, and I expect that in the US in 2024 there are topics where having taboo beliefs could totally end your career, for many values of “career”. (Leaving open whether there are such beliefs that are true, but, per the topic of this post, that’s not something you can learn without taking risks.)
2) seems even more prone to the effect you describe than 3).
My guess is you’re making a bid for 1), but I feel like a case for that should weigh the costs of believing X against the costs of agnosticism about X, rather than rest on a sweeping heuristic argument. (Where maybe the cost of agnosticism about X includes adjacent topics Y: you’ll either have to extend your agnosticism to them or eat the cost of the ~X connotations. I’m skeptical about how often this will come up, though, and per my comment here I expect the ~Y->~X taboos will often be much weaker than the ~X taboo.)