There’s something interesting about the answer “I wrote a script to figure it out”. Does that amount to giving a frequentist answer to a Bayesian question, or am I all wet?
If the latter, what does your example teach about frequentist vs Bayesian reasoning, Eliezer?
There’s something interesting about the answer “I wrote a script to figure it out”. Does that amount to giving a frequentist answer to a Bayesian question, or am I all wet?
Running a script is just coming up with a model of the problem where our uncertainties about the problem are isomorphic, or at least approximately isomorphic, to our uncertainties about the model. In this case, our uncertainties about the model are our uncertainties about the underlying algorithms in the script.
It’s just like dropping a flat disc on a Plinko board and saying “hey, at each junction the disc could go either way with roughly equal probability, so this is like a coin flip, so let’s simulate 1000 games of Plinko with coin flips and see what happens.”
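A minimal sketch of that coin-flip Plinko model (the row and game counts here are illustrative, not from the original discussion):

```python
import random

def plinko_slot(rows=12):
    """Drop one disc: at each junction it goes left or right with
    equal probability, so the final slot is the number of rights."""
    return sum(random.random() < 0.5 for _ in range(rows))

def simulate(n_games=1000, rows=12):
    """Tally which slot each of n_games discs lands in."""
    counts = [0] * (rows + 1)
    for _ in range(n_games):
        counts[plinko_slot(rows)] += 1
    return counts

counts = simulate()
```

The point of the analogy carries through: the simulation is trustworthy exactly to the extent that the coin-flip-per-junction assumption mirrors the physical board.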
If the latter, what does your example teach about frequentist vs Bayesian reasoning, Eliezer?
I don’t see it teaching anything about the difference, but if it does I’d be glad to hear it. I think cousin_it is right: this problem, like the Monty Hall problem, hinges on the difference between choosing something and choosing something randomly. Frequentists are well aware of the Monty Hall problem—it was one of my assigned problems last semester in my stat theory course, straight out of the text (Statistical Inference, 2nd edition, by Casella and Berger).
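The choosing-vs-choosing-randomly distinction can be made concrete with a quick Monty Hall simulation (a sketch of my own, not from the course text): the host’s door is a deliberate choice constrained by the car’s location, not a random one, and that is what makes switching pay off.

```python
import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host deliberately opens a door that hides a goat and isn't
    # the contestant's pick -- this is a choice, not a random reveal.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000):
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials
```

Running `win_rate(True)` comes out near 2/3 and `win_rate(False)` near 1/3; if the host instead opened a door uniformly at random (sometimes revealing the car), the asymmetry would vanish.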