I find it a convincing description of what someone saner than oneself looks like, i.e. P(Looks Sane|Sane) is high. However, there may be many possible entities which Look Sane but are not. For example, a flawed or malicious genius who produces genuine gems mixed with actual crackpottery or poisoned, seemingly-sane ideas beyond one’s capacity to detect. If a lot more things Look Sane than are, then you get turned into paperclips for neglecting to consider P(Sane|Looks Sane).
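A quick numerical sketch of this point, with numbers invented purely for illustration: even when P(Looks Sane|Sane) is 0.95, a small prior on genuinely saner speakers plus a modest rate of convincing imitators leaves P(Sane|Looks Sane) below 10%.

```python
# Hypothetical numbers, chosen only to illustrate the base-rate point;
# none of these values come from the discussion above.
p_sane = 0.01                  # prior: genuinely-saner speakers are rare
p_looks_given_sane = 0.95      # P(Looks Sane | Sane): saner speakers usually look sane
p_looks_given_not_sane = 0.10  # P(Looks Sane | not Sane): some imitators look sane too

# Total probability of the observation "Looks Sane"
p_looks = p_looks_given_sane * p_sane + p_looks_given_not_sane * (1 - p_sane)

# Bayes' theorem: P(Sane | Looks Sane)
p_sane_given_looks = p_looks_given_sane * p_sane / p_looks
print(round(p_sane_given_looks, 3))  # 0.088 -- small, despite P(Looks Sane | Sane) = 0.95
```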
Given the history of your thinking, I guess that in the story the entity in question is an AI, and the speaker is arguing why he is sure that the AI is super-wise? Hence the importance you now attach to the matter.
And that was the Bayesian flaw, though no, the story wasn’t about AI.
The probability that you see some amazingly sane things mixed with some apparently crazy things, given that the speaker is much saner (and honest), is not the same as the probability that you’re dealing with a much saner and honest speaker, given that you see a mix of some surprising, amazingly sane things with apparently crazy things.
For example, someone could grab some of the material from LW, use it without attribution, and mix it with random craziness.
Mencius Moldbug is a great current example, IMHO, of someone not sane who nonetheless says some surprising and amazingly sane things.
But
is an assertion that P(Sane|Looks Sane) ~ 1, so it seems that this isn’t a Bayesian flaw per se.
P(sane things plus crazy things | speaker is saner) P(speaker is saner) = P(speaker is saner | sane things plus crazy things) P(sane things plus crazy things)
The fact that P(sane things plus crazy things | speaker is saner) ≠ P(speaker is saner | sane things plus crazy things) isn’t a problem if you deal with your priors correctly.
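To make the point about priors concrete, here is a small sketch with invented numbers: dividing both sides of the identity above by P(sane things plus crazy things) recovers the posterior, and the two conditional probabilities can differ substantially without any inconsistency.

```python
# Invented numbers, only to show the identity and the resulting posterior.
p_saner = 0.01            # prior that the speaker really is much saner
p_mix_given_saner = 0.9   # P(sane things plus crazy things | speaker is saner)
p_mix_given_not = 0.2     # the same mix from cranks or plagiarists

# Evidence term: total probability of seeing the mix at all
p_mix = p_mix_given_saner * p_saner + p_mix_given_not * (1 - p_saner)

# Both sides of the identity equal the joint probability P(mix and saner)
lhs = p_mix_given_saner * p_saner
p_saner_given_mix = lhs / p_mix      # divide by P(mix) to get the posterior
rhs = p_saner_given_mix * p_mix
assert abs(lhs - rhs) < 1e-12

print(round(p_saner_given_mix, 3))  # 0.043: the two conditionals differ, and that's fine
```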
I think I misinterpreted your original question as meaning “Why is this problem fundamentally difficult even for Bayesians?”, when it was actually, “What’s wrong with the reasoning used by the speaker in addressing this problem?”
news.ycombinator.com?