I’m wondering about your reasons for posting a straightforward probability question (as a top-level post rather than an Open Thread comment, no less). Are you trying to take a reading of how competent the average LW contributor currently is on trivial questions? Are you setting up a real-world problem analogous to this, where most people get the wrong answer? Or is it something else entirely?
If the problem is too easy, consider the meta-problem: what makes argument 1 seductive, and how can we teach ourselves to easily see through such arguments in the future?
(In this case it was easy to see the flaw in argument 1 because argument 2 was laid out right beside it. What if all we had was argument 1?)
I think perhaps our intuitive understanding of “state of knowledge” is wrong, and we need to fix it, but I’m not sure how.
In this particular case, all we need to do is encode our “state of knowledge” formally into the relevant probabilities, mooting all appeals to intuition.
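To make that concrete, here is a minimal sketch of what "encoding the state of knowledge" looks like in code. The original puzzle isn't restated in this thread, so the problem below (two fair coin flips, where we learn only that at least one landed heads) is a hypothetical stand-in; the point is the pattern of enumerating the outcome space and conditioning on the evidence, not the particular numbers.

```python
from fractions import Fraction

# Hypothetical stand-in problem: two fair coin flips, and we learn only
# that at least one came up heads. "Encoding the state of knowledge" means
# listing every outcome consistent with what we know and renormalising,
# rather than arguing from intuition.
outcomes = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]
prior = {o: Fraction(1, 4) for o in outcomes}  # prior over the full outcome space

def consistent(outcome):
    """Is this outcome consistent with what we learned (at least one heads)?"""
    return "H" in outcome

posterior_mass = sum(p for o, p in prior.items() if consistent(o))

# P(both heads | at least one heads): conditioning is just filtering and renormalising.
p_both_heads = prior[("H", "H")] / posterior_mass
print(p_both_heads)  # 1/3
```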
However, this is a “toy problem”; in real-world situations I expect it will not be practical to enumerate all possible outcomes.
I am helping a colleague of mine investigate the application of Bayesian inference methods to software testing, and we're running into much the same difficulty: on an extremely simplified problem we can draw definite conclusions, but we don't yet know how to extend those conclusions to situations the industry would consider relevant.
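To give a flavour of the simplified setting (this is not our actual model, just an illustration of the kind of update involved, with made-up numbers): treat each passing test as evidence against the presence of a bug.

```python
from fractions import Fraction

def p_buggy_after_passing_tests(prior_buggy, p_test_catches_bug, n_passing):
    """Posterior probability that the code is buggy after n independent tests all pass.

    Assumes a correct program always passes, and a buggy program slips past
    each test with probability (1 - p_test_catches_bug).
    """
    like_buggy = (1 - p_test_catches_bug) ** n_passing  # P(all n pass | buggy)
    like_correct = 1                                     # P(all n pass | correct)
    numerator = prior_buggy * like_buggy
    return numerator / (numerator + (1 - prior_buggy) * like_correct)

# Made-up numbers: 50/50 prior, each test has a 20% chance of exposing the bug if present.
for n in (0, 5, 20):
    print(n, p_buggy_after_passing_tests(Fraction(1, 2), Fraction(1, 5), n))
```

Even in this toy form it is clear why extending to industry-relevant situations is hard: real test suites are neither independent nor characterised by a single detection probability.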
I occasionally get requests for more homework problems. You’re also correct that I was curious about the average skill level on LW.
In retrospect, though, what I should have done was start a Bayesian Fun Problems Thread. Will try to remember to do that next time I have a puzzle.