What steps would one take to get more actionable information on this topic?
I’d suggest starting by reading up on “brainwashing” and developing a sense of what signs characterize it (and, indeed, if it’s even a thing at all).
For each new visitor who comes in and accuses us of messianic groupthink how far should I update in the direction of believing them?
Presumably this depends on how much new evidence they are providing relative to the last visitor accusing us of messianic groupthink, and whether you think you updated properly then. A dozen people repeating the same theory based on the same observations is not (necessarily) significantly more evidence in favor of that theory than five people repeating it; what you should be paying attention to is new evidence.
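(A quick illustrative sketch of that independent-versus-repeated-evidence point, in Python. The prior odds, the per-report likelihood ratio, and the update_odds helper are all invented for the example, not taken from anything in this thread; it just shows how odds-form updating treats twelve independent reports differently from twelve repetitions of the same report.)

```python
# Hypothetical illustration (numbers invented for this sketch): how Bayesian
# odds respond to independent reports versus reports that all rest on the
# same underlying observations.

def update_odds(prior_odds, likelihood_ratio):
    """One update in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior_odds = 0.05      # assumed prior odds that the accusation is true
lr_per_report = 2.0    # assumed likelihood ratio per genuinely independent report

# Twelve accusers, each bringing their own observations:
odds_independent = prior_odds
for _ in range(12):
    odds_independent = update_odds(odds_independent, lr_per_report)

# Twelve accusers repeating the same theory based on the same observations:
# roughly one report's worth of evidence, not twelve.
odds_correlated = update_odds(prior_odds, lr_per_report)

print(f"independent reports: posterior odds ~ {odds_independent:.1f}")
print(f"correlated reports:  posterior odds ~ {odds_correlated:.2f}")
```

The hard judgment call in practice is how independent the reports actually are; fully correlated testimony is just the limiting case sketched here.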
Note that your suggestions are all within the framework of the “accepted LW wisdom”. The best you can hope for is to detect some internal inconsistencies in this framework. One’s best chance of “deconversion” is usually to seriously consider the arguments from outside the framework of beliefs, possibly after realizing that the framework in question is not self-consistent or leads to personally unacceptable conclusions (like having to prefer torture to specks). Something like that “worked” for palladias, apparently. Also, I once described an alternative to the LW epistemology (my personal brand of instrumentalism), but it did not go over very well.
Brainwashing (which is one thing drethlin asked about the probability of) is not an LW concept, particularly; I’m not sure how reading up on it is remaining inside the “accepted LW wisdom.”
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I’m being brainwashed. And, yes, if I conclude that it’s likely that I’m being brainwashed, there are various deconversion techniques I can use to negate that.
Of course, seriously considering arguments from outside the framework of beliefs is a good idea regardless.
Being completely wrong (the other thing drethlin asked about the probability of) admittedly doesn’t lend itself to this approach so well… it’s hard to know where to even start there.
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I’m being brainwashed.
Reading up on brainwashing can mean reading gwern’s essay, which concludes that brainwashing doesn’t really work. Of course, that’s exactly what someone who wanted to brainwash you would tell you, isn’t it?
Sure. I’m not exactly sure why you’d choose to interpret “read up on brainwashing” in this context as meaning “read what a member of the group you’re concerned about being brainwashed by has to say about brainwashing,” but I certainly agree that it’s a legitimate example, and it has exactly the failure mode you imply.
For what it’s worth, gwern’s findings are consistent with mine (see this thread). I’d rather restrict “brainwashing” to coercive persuasion, e.g. indoctrinating prisoners of war or what have you, but Scientology, the Unification Church, and so forth also seem remarkably poor at long-term persuasion. It’s difficult to find comparable numbers for large, socially accepted religions, or for that matter nontheism—more of the conversion process plays out in the public sphere, making it harder to delineate, and ulterior motives (e.g. converting to a fiancée’s religion) are much more common—but if you read between the lines they seem to be higher.
Deprogramming techniques aren’t much better, incidentally—from everything I’ve read they range from the ineffective to the abusive, and often have quite a bit in common with brainwashing in the coercive sense. You couldn’t apply most of them to yourself, and wouldn’t want to in any case.
Brainwashing (which is one thing drethlin asked about the probability of) is not an LW concept, particularly; I’m not sure how reading up on it is remaining inside the “accepted LW wisdom.”
No argument there. What I alluded to is the second part, incremental “Bayesian” updating based on (independent) new evidence. This is more of an LW “inside” thing.
Ah! Yes, fair.
Sorry, I wasn’t trying to be nonresponsive, that reading just didn’t occur to me. (Coincidence? Or a troubling sign of epistemic closure?)
I will admit, the idea that I should update my beliefs based on new evidence, but that presenting me with the same evidence over and over should not significantly update them, seems to me nothing but common sense.
Of course, that’s just what I should expect it to feel like if I were trapped inside a self-reinforcing network of pernicious false beliefs.
So, all right… in the spirit of seriously considering arguments from outside the framework, and given that as a champion of an alternative epistemology you arguably count as “outside the framework”, what would you propose as an answer to drethlin’s question about how far they should update based on each new critic?
what would you propose as an answer to drethlin’s question about how far they should update based on each new critic?
Hmm. My suspicion is that formulating the question in this way already puts you “inside the box”, since it uses Bayesian terms to begin with. Something like trying to detect problems in a religious moral framework after postulating objective morality. Maybe this is not a good example, but a better one eludes me at the moment. To honestly try to break out of the framework, one has to find a way to ask different questions. I suspect that I am too much “inside” to figure out what they could be.
(nods) That’s fair.
And I can certainly see how, if we did not insist on framing the problem in terms of how to consistently update confidence levels based on evidence in the first place, other ways of approaching the “how can I tell if I’m being brainwashed?” question would present themselves. Some traditional examples that come to mind are praying for guidance on the subject and various schools of divination. Of course, a huge number of less traditional possibilities that seem equally unjustified from a “Bayesian” framework (but otherwise have nothing in common with those) are also possible.
I’m not terribly concerned about it, though.
Then again, I wouldn’t be.