Brainwashing (which is one thing drethlin asked about the probability of) is not an LW concept, particularly; I’m not sure how reading up on it is remaining inside the “accepted LW wisdom.”
No argument there. What I alluded to is the second part, incremental “Bayesian” updating based on (independent) new evidence. This is more of an LW “inside” thing.
Ah! Yes, fair.
Sorry, I wasn’t trying to be nonresponsive; that reading just didn’t occur to me. (Coincidence? Or a troubling sign of epistemic closure?)
I will admit, the idea that I should update my beliefs based on new evidence, but that presenting me with the same evidence over and over should not significantly update them, seems to me nothing but common sense.
Of course, that’s just what I should expect it to feel like if I were trapped inside a self-reinforcing network of pernicious false beliefs.
So, all right… in the spirit of seriously considering arguments from outside the framework, and given that as a champion of an alternative epistemology you arguably count as “outside the framework”, what would you propose as an answer to drethlin’s question about how far they should update based on each new critic?
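The distinction being appealed to above, between independent new evidence and the same evidence re-presented, is easy to make concrete. What follows is a minimal sketch with made-up numbers (nothing here is from the exchange itself): in odds form, Bayes’ rule multiplies the prior odds by one likelihood ratio per genuinely independent piece of evidence, so several independent critics move the posterior much further than one critic repeating the same argument several times.

```python
# Toy illustration (hypothetical numbers): independent evidence compounds,
# but re-hearing the same evidence does not, since P(H | E, E) = P(H | E).

def posterior(prior, likelihood_ratio, n_independent):
    """Odds-form Bayes: posterior odds = prior odds * likelihood_ratio ** n."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n_independent
    return odds / (1 + odds)

p0 = 0.9   # hypothetical prior confidence in the framework
lr = 0.5   # hypothetical likelihood ratio carried by one independent critic

print(posterior(p0, lr, 1))  # one critic: ~0.82
print(posterior(p0, lr, 5))  # five independent critics: ~0.22
print(posterior(p0, lr, 1))  # one critic repeating the same point: still ~0.82
```

On this framing, how far to update per additional critic turns largely on how independent each critic’s evidence actually is; critics drawing on a shared source add fewer effective updates than their headcount suggests.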
Hmm. My suspicion is that formulating the question in this way already puts you “inside the box”, since it uses Bayesian terms to begin with. Something like trying to detect problems in a religious moral framework after postulating objective morality. Maybe this is not a good example, but a better one eludes me at the moment. To honestly try to break out of the framework, one has to find a way to ask different questions. I suspect that I am too much “inside” to figure out what they could be.
(nods) That’s fair.
And I can certainly see how, if we did not insist on framing the problem in terms of how to consistently update confidence levels based on evidence in the first place, other ways of approaching the “how can I tell if I’m being brainwashed?” question would present themselves. Some traditional examples that come to mind are praying for guidance on the subject and various schools of divination. Of course, a huge number of less traditional possibilities that seem equally unjustified from a “Bayesian” framework (but otherwise have nothing in common with those) are also possible.
I’m not terribly concerned about it, though.
Then again, I wouldn’t be.