And that was the Bayesian flaw, though no, the story wasn’t about AI.
The probability that you see some amazingly sane things mixed with some apparently crazy things, given that the speaker is much saner (and honest), is not the same as the probability that you’re dealing with a much saner and honest speaker, given that you see a mix of some surprisingly and amazingly sane things with apparently crazy things.
For example, someone could grab some of the material from LW, use it without attribution, and mix it with random craziness.
Mencius Moldbug is a great current example, IMHO, of someone who is not sane but says some surprisingly and amazingly sane things.
But that is an assertion that P(Sane|Looks Sane) ≈ 1, so it seems that this isn’t a Bayesian flaw per se.
P(sane things plus crazy things | speaker is saner) P(speaker is saner) = P(speaker is saner | sane things plus crazy things) P(sane things plus crazy things)
The fact that P(sane things plus crazy things | speaker is saner) ≠ P(speaker is saner | sane things plus crazy things) isn’t a problem if you deal with your priors correctly.
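For concreteness, here is a minimal sketch of that update in Python; the prior and both likelihoods are invented numbers for illustration, not anything asserted in the thread:

# Bayes update for P(speaker is saner | sane things plus crazy things).
# All numbers are hypothetical assumptions chosen only to illustrate the point.

def posterior_saner(prior_saner, p_mix_given_saner, p_mix_given_not_saner):
    """Posterior probability that the speaker is much saner, given the observed mix."""
    p_mix = (p_mix_given_saner * prior_saner
             + p_mix_given_not_saner * (1 - prior_saner))
    return p_mix_given_saner * prior_saner / p_mix

# Assumed: saner-and-honest speakers are rare (prior 1%), they produce the observed
# mix 90% of the time, and an imitator (e.g. someone reusing LW material and padding
# it with craziness) produces the same mix 5% of the time.
print(posterior_saner(0.01, 0.90, 0.05))  # ~0.154

Even with P(mix | saner) = 0.9, the posterior stays far below 1 once the low prior and the imitator hypothesis are accounted for, which is exactly the gap between the two conditional probabilities above.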
I think I misinterpreted your original question as meaning “Why is this problem fundamentally difficult even for Bayesians?”, when it was actually, “What’s wrong with the reasoning used by the speaker in addressing this problem?”
news.ycombinator.com?