I am not sure I understand, probably because I am too preprogrammed by Bayesianism.
You roll a d20, it comes up with a number (let’s say 8). The Frequentist now believes there is a 95% chance the die is loaded to produce 8s? But they won’t bet 20:1 on the result, and instead they will do something else with that 95% number? Maybe use it to publish a journal article, I guess.
Bayesianism defines probability in terms of belief. Frequentism defines probability as a statement about the world’s true probability. Saying “[t]he Frequentist now believes” is therefore asking for a Frequentist’s Bayesian probability.
Right, okay. I am trying to learn your ontology here, but the concepts are at quite an inferential distance from my current ones. I don’t understand what the 95% means. I don’t understand why the d100 has a 99% chance of being fixed after one roll, while a d10 only has 90%. By the second roll I think I can start to stomach the logic here, though, so maybe we can set that aside.
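If I’ve followed the logic, the number is just one minus the chance that a fair die lands on the observed face. A minimal sketch of that arithmetic (the function name is mine):

```python
def confidence_loaded(n_sides: int) -> float:
    """'Confidence' that the die is loaded after one roll, p-value style:
    a fair die produces the observed face with probability 1/n_sides."""
    return 1 - 1 / n_sides

print(confidence_loaded(10))   # 0.9  -> the d10's 90%
print(confidence_loaded(20))   # 0.95 -> the d20's 95%
print(confidence_loaded(100))  # 0.99 -> the d100's 99%
```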
In my terms, when you say that a Bayesian wouldn’t bet $1bil:$1 that the sun will rise tomorrow, that doesn’t seem correct to me. It’s true that I wouldn’t actually make that nightly bet, because the risk-free rate is like 3% per annum, so it’d be a pretty terrible allocation of risk; plus it seems like it’d be an assassination market on the rotation of the Earth, and I don’t like incentivizing that as a matter of course. But doesn’t the math of likelihood ratios work just as well to bury bad theories under a mountain of evidence?
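To make the “mountain of evidence” concrete, a toy sketch with invented numbers (a rival theory that assigns each sunrise only 50%, not anyone’s actual belief):

```python
# Each observed sunrise multiplies the odds by the likelihood ratio
# P(sunrise | sun always rises) / P(sunrise | rival 50% theory) = 2.
prior_odds = 1.0                 # start indifferent between the two theories
lr_per_sunrise = 1.0 / 0.5
for n in (10, 100, 365):
    posterior_odds = prior_odds * lr_per_sunrise ** n
    print(f"after {n} sunrises, odds = {posterior_odds:.3g} : 1")
```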
I think declining to assign a 1e-40 chance to an event is an epistemological choice separate from Bayesianism. The math itself seems quite capable of leading to that conclusion, and of recovering from that state quickly enough.
I think maybe the crux is “There is no way for a Bayesian to be wrong. Everything is just an update. But a Frequentist who said the die was fair can be proven wrong to arbitrary precision.” You can, if the Bayesian announces their prior, know precisely how much evidence they will require before they believe the die is loaded.
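A sketch of that calculation, with made-up numbers (a one-in-a-million announced prior that the d20 is loaded to produce 8s):

```python
import math

# Each observed 8 multiplies the odds by P(8|loaded)/P(8|fair)
# = 1/(1/20) = 20. How many consecutive 8s until the posterior
# crosses 0.5 (i.e. odds exceed 1)?
prior = 1e-6
prior_odds = prior / (1 - prior)
rolls_needed = math.ceil(math.log(1 / prior_odds) / math.log(20))
print(rolls_needed)  # 5 -- five straight 8s overturn a one-in-a-million prior
```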
Again, I hope this is taken in the spirit I mean it, which is “you are the only self-proclaimed Frequentist on this board I know of, so you are a very valuable source of epistemic variation that I should learn how to model”.
I strong upvoted this because something about this comment makes it hilarious to me (in a good way).
With two hypotheses, “the die is fair” vs. “the die is 100% loaded toward some face,” a single roll doesn’t discriminate at all: not knowing which face the die favors, the observed face is equally likely under both hypotheses. The key insight is that you have to combine Bayesian and Frequentist theories. The prior is heavily weighted towards “the die is fair,” such that even 3 or 4 of the same number in a row doesn’t push the actionable probability all the way to “more likely weighted,” but as independent observations continue, the weight of evidence accumulates.
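A sketch of that picture with assumed numbers (a d20 and a 1e-4 prior on “loaded toward some unknown face”):

```python
# The first roll carries no evidence (likelihood ratio 1; it only pins
# down the candidate face). Each repeat of that face then multiplies
# the odds by P(face|loaded)/P(face|fair) = 1/(1/20) = 20.
odds = 1e-4 / (1 - 1e-4)
for roll in range(1, 6):
    odds *= 1.0 if roll == 1 else 20.0
    posterior = odds / (1 + odds)
    print(f"roll {roll} (same face): P(loaded) = {posterior:.4f}")
```

With these numbers, four repeats of the same face still leave P(loaded) under 0.5, and the fifth pushes it past 0.94, which is the “evidence accumulates” behavior described above.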