> Hypothesis testing: I give you a black-box random distribution and claim it obeys a specified formula. You sample some data from the box and inspect it. Frequentism often allows you to call me a liar and be wrong no more than 10% of the time, guaranteed, no priors in sight.
Wrong. If all black boxes do obey their specified formulas, then every single time you call the other person a liar, you will be wrong. P(wrong|”false”) ~ 1.
I’m thinking you still haven’t quite understood here what frequentist statistics do.
It’s not perfectly reliable. They assume they have perfect information about experimental setups and likelihood ratios. (Where does this perfect knowledge come from? Can Bayesians get their priors from the same source?)
A Bayesian who wants to report something at least as reliable as a frequentist statistic, simply reports a likelihood ratio between two or more hypotheses from the evidence; and in that moment has told another Bayesian just what frequentists think they have perfect knowledge of, but simply, with far less confusion and error and mathematical chicanery and opportunity for distortion, and greater ability to combine the results of multiple experiments.
And more importantly, we understand what likelihood ratios are, and that they do not become posteriors without adding a prior somewhere.
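To make the likelihood-ratio report concrete, here is a minimal sketch in Python (the hypotheses, data, and prior odds below are invented for illustration, not taken from the thread): two fully specified hypotheses about a coin's bias, the likelihood ratio the observed flips give between them, ratios from independent experiments combined by multiplication, and the prior that is still needed before any of it becomes a posterior.

```python
from math import comb

# Two fully specified hypotheses about a coin's bias (illustrative values).
P_H1, P_H2 = 0.5, 0.7

def binom_pmf(k, n, p):
    """Probability of k heads in n flips when each flip lands heads with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def likelihood_ratio(heads, flips):
    """How much more strongly the observed flips favor H1 over H2."""
    return binom_pmf(heads, flips, P_H1) / binom_pmf(heads, flips, P_H2)

# Independent experiments combine by multiplying their likelihood ratios.
experiments = [(12, 20), (33, 50)]            # (heads, flips) per experiment, made up
ratios = [likelihood_ratio(h, n) for h, n in experiments]
combined = 1.0
for r in ratios:
    combined *= r

print("per-experiment ratios:", [round(r, 3) for r in ratios])
print("combined ratio, H1 vs H2:", round(combined, 3))

# The ratio only becomes a posterior once a prior is supplied.
prior_odds = 1.0                              # e.g. 1:1 prior odds between H1 and H2
print("posterior odds given 1:1 prior odds:", round(prior_odds * combined, 3))
```

The multiplication step is what "combine the results of multiple experiments" cashes out to: independent data sets contribute independent factors to the same ratio.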
> Wrong. If all black boxes do obey their specified formulas, then every single time you call the other person a liar, you will be wrong. P(wrong|”false”) ~ 1.
Ok, bear with me. cousin_it’s claim was that P(wrong|boxes-obey-formulas)<=.1, am I right? I get that P(wrong|”false” & boxes-obey-formulas) ~ 1, so the denial of cousin_it’s claim seems to require P(“false”|boxes-obey-formulas) > .1? I assumed that the point was precisely that the frequentist procedure will give you P(“false”|boxes-obey-formulas)<=.1. Is that wrong?
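A quick simulation can separate the two conditional probabilities at issue (a sketch with invented parameters: in this simulated world every box really is the fair coin it claims to be, and the test rejects at the 10% level):

```python
import random
from math import comb

FLIPS, LEVEL, N_BOXES = 100, 0.10, 20_000   # illustrative parameters

def binom_pmf(k, n, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two-sided test of "this box is a fair coin": widen the cutoff until the
# probability of rejecting a truly fair coin is at most LEVEL.
cutoff = 0
while sum(binom_pmf(k, FLIPS) for k in range(FLIPS + 1)
          if abs(k - FLIPS / 2) >= cutoff) > LEVEL:
    cutoff += 1

# Every box obeys its claimed formula, so every accusation is a mistake.
accusations = 0
for _ in range(N_BOXES):
    heads = sum(random.random() < 0.5 for _ in range(FLIPS))
    if abs(heads - FLIPS / 2) >= cutoff:
        accusations += 1

print("P('liar!' | box is honest) ~", round(accusations / N_BOXES, 3), "(bounded by", LEVEL, ")")
print("P(wrong | said 'liar!') =", 1.0 if accusations else "undefined (no accusations)")
```

The first number is what the 10% guarantee actually bounds; the second is the quantity being pointed at above, and it stays at 1 no matter how small the significance level is, because in this world there are no liars to catch.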
> Wrong. If all black boxes do obey their specified formulas, then every single time you call the other person a liar, you will be wrong. P(wrong|”false”) ~ 1.
> I’m thinking you still haven’t quite understood here what frequentist statistics do.
> It’s not perfectly reliable. They assume they have perfect information about experimental setups and likelihood ratios. (Where does this perfect knowledge come from? Can Bayesians get their priors from the same source?)
> A Bayesian who wants to report something at least as reliable as a frequentist statistic, simply reports a likelihood ratio between two or more hypotheses from the evidence; and in that moment has told another Bayesian just what frequentists think they have perfect knowledge of, but simply, with far less confusion and error and mathematical chicanery and opportunity for distortion, and greater ability to combine the results of multiple experiments.
> And more importantly, we understand what likelihood ratios are, and that they do not become posteriors without adding a prior somewhere.
Thanks for the catch, struck out that part.
Yes, you can get your priors from the same source they get experimental setups: the world. Except this source doesn’t provide priors.
ETA: likelihood ratios don’t seem to communicate the same info about the world as confidence intervals to me. Can you clarify?
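Not an answer from the thread, but one way to put the two kinds of report side by side (a sketch with invented data, a known-variance normal model, and arbitrarily chosen hypotheses): a 90% confidence interval for a normal mean, and the likelihood ratio the same sample gives between two particular values of that mean.

```python
import math

# Invented sample; model: X_i ~ Normal(mu, sigma) with sigma known.
data = [1.3, 0.7, 2.1, 1.8, 0.9, 1.5, 1.1, 1.6]
n, sigma = len(data), 1.0
mean = sum(data) / n

# Frequentist report: a 90% confidence interval for mu (z = 1.645).
half_width = 1.645 * sigma / math.sqrt(n)
print(f"90% CI for mu: ({mean - half_width:.2f}, {mean + half_width:.2f})")

# Likelihood-ratio report: how strongly the same sample favors mu = 1.0 over mu = 2.0
# (values chosen arbitrarily; constants common to both hypotheses cancel in the ratio).
def log_likelihood(mu):
    return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)

mu1, mu2 = 1.0, 2.0
lr = math.exp(log_likelihood(mu1) - log_likelihood(mu2))
print(f"Likelihood ratio, mu={mu1} vs mu={mu2}: {lr:.3f}")
```

The interval is a statement about the procedure's coverage; the ratio is a statement about how strongly this particular sample discriminates between the two specified hypotheses.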
> Ok, bear with me. cousin_it’s claim was that P(wrong|boxes-obey-formulas)<=.1, am I right? I get that P(wrong|”false” & boxes-obey-formulas) ~ 1, so the denial of cousin_it’s claim seems to require P(“false”|boxes-obey-formulas) > .1? I assumed that the point was precisely that the frequentist procedure will give you P(“false”|boxes-obey-formulas)<=.1. Is that wrong?
My claim was what Eliezer said it was, and it was incorrect. Other than that, your comment is correct.
Ah, I parsed it wrongly. Whoops. Would it be worth replacing it with a corrected claim rather than just striking it?
Done. Thanks for the help!