My guess is that there are fragments of things addressing at least part of this, just not oriented around ideology as a keyword (belief as attire, professing and cheering, the fable of science and politics). One thing I'd note is that much of the Sequences is focused on "here is a way for beliefs to be wrong" rather than examining more closely why this way of treating beliefs might be useful. (Robin Hanson's work, I think, often explores that more directly.)
What to do if you spot a harmful ideology is a political question, and in some cases the answer might be pretty orthogonal to rationality. (Although you might mean the more specific subquestion of "how to stop harmful ideologies while maintaining/raising the sanity waterline," i.e., many people fight harmful ideologies with counter-ideologies.)
Some random additional thoughts (these might also be part of what you were already thinking of; it's just what my brain had easily available):
I think I see the word ideology as a bit more neutral than your phrasing here suggests. Or at least, your examples are "generally accepted around here as false/bad." But LessWrong has an overall ideology of beliefs-that-we-coordinate-around, complete with "those beliefs being object-level useful" and "some people using those beliefs as attire, sometimes for reasons that are plausibly virtuous and sometimes for reasons that seem like exactly the sort of thing Eliezer wrote the Sequences to complain about."
Science also has an ideology (similar to, but distinct from, Yudkowskianism). The Sequences also cover "how to address wrongness in the science ideology," I think. For example, in Science or Bayes:
In physics, you can get absolutely clear-cut issues. Not in the sense that the issues are trivial to explain. But if you try to apply Bayes to healthcare, or economics, you may not be able to formally lay out what is the simplest hypothesis, or what the evidence supports. But when I say “macroscopic decoherence is simpler than collapse” it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code. Nor is the evidence itself in dispute.
I wanted a very clear example—Bayes says “zig”, this is a zag—when it came time to break your allegiance to Science. [emphasis mine]
“Oh, sure,” you say, “the physicists messed up the many-worlds thing, but give them a break, Eliezer! No one ever claimed that the social process of science was perfect. People are human; they make mistakes.”
But the physicists who refuse to adopt many-worlds aren’t disobeying the rules of Science. They’re obeying the rules of Science.
The tradition handed down through the generations says that a new physics theory comes up with new experimental predictions that distinguish it from the old theory. You perform the test, and the new theory is confirmed or falsified. If it’s confirmed, you hold a huge celebration, call the newspapers, and hand out Nobel Prizes for everyone; any doddering old emeritus professors who refuse to convert are quietly humored. If the theory is disconfirmed, the lead proponent publicly recants, and gains a reputation for honesty.
(Paul Graham's "What You Can't Say" is also relevant.)

So, one way to fight a bad/wrong/incomplete ideology is… well, to argue against it, if you're in an environment where that sort of thing works. If you're not in an environment conducive to clear argument, the obvious choices are "first try to make the environment conducive to argument" or, well, various dark-artsy rhetorical flourishes that work symmetrically whether your ideas are good or not.
It seems like you have more specific questions in mind (would be curious what your motivating examples are).
The way I’d have carved up your question space is less like “how to stop/fight ideologies” and more like “what to do about the general fact of some sets of beliefs becoming sticky over time?”
The Sequences also touch on this: in response to the claim "Death is good because it kills old scientists who are stuck in their ways, which allows science to march forward," Eliezer replies, roughly, "Jesus Christ, sure, but you can just make scientists retire without killing them." But you do still need to implement the part where you actually make them retire as public figures.
What to do if you spot a harmful ideology is a political question, and in some cases the answer might be pretty orthogonal to rationality. (Although you might mean the more specific subquestion of "how to stop harmful ideologies while maintaining/raising the sanity waterline," i.e., many people fight harmful ideologies with counter-ideologies.)
Right, politics as usual seems to imply a sequence of ideologies replacing each other, and it might just be a random walk with respect to how beneficial or harmful the ideologies are. My question is how to do better than that.
It seems like you have more specific questions in mind (would be curious what your motivating examples are).
My original motivating examples came from contemporary US politics, so it’s probably better not to bring them up here, but I’m now also worried about the implications for the “long reflection” / “great deliberation”.
first try to make the environment conducive to argument
By doing what? I mean, it seems possible to build environments conducive to argument for a relatively small group of people, like LW, but I don't know what can be done to push a whole society in that direction, so that's part of my question.
The way I’d have carved up your question space is less like “how to stop/fight ideologies” and more like “what to do about the general fact of some sets of beliefs becoming sticky over time?”
I think I'm still more inclined to use the first framing, because if we make beliefs less sticky, it might just speed up the cycles of ideologies replacing each other, and it seems like the bigger problem is "beliefs as rallying flags" (i.e., beliefs can be selected for because they are good rallying flags rather than for epistemic reasons).
(btw, I think this comment would work well as a question, which might make it easier to reference in the future)

I'd have no problem with turning it into a top-level question post, if that's something you can do. (I posted it in Open Thread in case there was already some sequence of posts that directly addressed my questions, that I simply missed.) If not, I may write a question post after I do some more research and think/talk things over.