You can think in terms of complete distributions within the frequentist framework perfectly well, too.
Does anyone do that, though?
Essentially, it (statistical power) is the ability to detect a signal (of a given effect size) in the presence of noise (in a given amount) with a given level of confidence.
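To make that tradeoff concrete, here is a minimal power-calculation sketch in Python, using the usual normal approximation for a two-sample comparison of means; the helper name and every number in it (the effect sizes, α = 0.05, 80% power) are illustrative assumptions, not anything from this exchange.

```python
# Approximate sample size per group needed to detect a standardized
# effect size d with a two-sided test at level alpha and a target power,
# via the standard normal approximation. All numbers are illustrative.
from scipy.stats import norm

def required_n_per_group(d, alpha=0.05, power=0.80):
    """n per group ~= 2 * ((z_{1-alpha/2} + z_{power}) / d)**2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value of the test
    z_beta = norm.ppf(power)           # quantile reached at the target power
    return 2 * ((z_alpha + z_beta) / d) ** 2

for d in (0.2, 0.5, 0.8):  # Cohen's conventional small/medium/large effects
    print(f"d = {d}: ~{required_n_per_group(d):.0f} subjects per group")
# d = 0.2: ~392 subjects per group
# d = 0.5: ~63 subjects per group
# d = 0.8: ~25 subjects per group
```

The inverse-square dependence on d is the interaction at work: halving the effect size quadruples the sample needed for the same confidence.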
Well, if you want to think of it like that, you could probably formulate all of this in information-theoretic terms and speak of needing a certain number of bits; the sample size & effect size then interact to determine how many bits each additional observation contains. So a binary variable contains a lot less information than a continuous variable, a shift in a rare observation like 90⁄10 is going to be harder to detect than a shift in a 50⁄50 split, etc. That’s not stuff I know a lot about.
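One way to cash out the 90⁄10 vs 50⁄50 point is to hold the effect fixed on the odds scale and simulate. In this sketch, every setting (odds ratio 1.5, n = 200 per group, 10,000 replicates) is a made-up illustration; the same multiplicative effect is detected far less often at the extreme base rate.

```python
# Monte Carlo comparison: power of a two-proportion z-test to detect the
# same odds-ratio effect at a balanced vs an extreme base rate.
# All settings (odds ratio, n, replicate count) are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def power_sim(p0, odds_ratio=1.5, n=200, reps=10_000, alpha=0.05):
    odds1 = odds_ratio * p0 / (1 - p0)
    p1 = odds1 / (1 + odds1)                      # treated-group proportion
    x0 = rng.binomial(n, p0, reps)                # control successes
    x1 = rng.binomial(n, p1, reps)                # treated successes
    pooled = (x0 + x1) / (2 * n)                  # pooled proportion under H0
    se = np.sqrt(pooled * (1 - pooled) * 2 / n)   # SE of the difference
    z = (x1 - x0) / n / se                        # two-proportion z statistic
    return np.mean(np.abs(z) > norm.ppf(1 - alpha / 2))

print(power_sim(0.5))   # balanced 50/50 base rate: power ~0.5
print(power_sim(0.9))   # extreme 90/10 base rate:  power ~0.2
```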
Well, sure. The frequentist approach, a.k.a. mainstream statistics, deals with distributions all the time, and the arguments about particular tests or predictions being optimal, or unbiased, or asymptotically valid, etc., are all explicitly conditional on characteristics of the underlying distributions.
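As a concrete instance of that conditionality: the sample mean is the efficient location estimator under a normal distribution, but under the heavier-tailed Laplace distribution the sample median (the maximum-likelihood estimator there) has markedly lower variance. A quick simulation sketch, with arbitrary settings:

```python
# Optimality is conditional on the distribution: compare the sampling
# variance of the mean vs the median under normal and Laplace data.
# Sample size and replicate count are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 20_000

for name, draws in [
    ("normal",  rng.standard_normal((reps, n))),
    ("laplace", rng.laplace(size=(reps, n))),
]:
    var_mean = draws.mean(axis=1).var()
    var_median = np.median(draws, axis=1).var()
    print(f"{name}: var(mean) = {var_mean:.4f}, var(median) = {var_median:.4f}")
# normal:  var(mean) ~ 0.010, var(median) ~ 0.015  (the mean wins)
# laplace: var(mean) ~ 0.020, var(median) ~ 0.010  (the median wins)
```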
Well, if you want to think of it like that, you could probably formulate all of this in information-theoretic terms and speak of needing a certain number of bits;
Yes, something like that. Take a look at Fisher information, e.g. “The Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ upon which the probability of X depends.”
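For the Bernoulli case above, the Fisher information can be written down and checked by simulation. This sketch (arbitrary settings, hypothetical helper name) computes I(p) = 1/(p(1-p)) analytically, verifies it as the variance of the score, and also shows the information carried about the log-odds, which is p(1-p):

```python
# Fisher information of a single Bernoulli(p) observation, two ways:
# the analytic formula I(p) = 1/(p(1-p)), and the definition
# I(p) = Var[score], where the score is d/dp of the log-likelihood.
# The reparameterized information for the log-odds scale is also shown.
import numpy as np

rng = np.random.default_rng(2)

def fisher_info_mc(p, reps=1_000_000):
    x = rng.binomial(1, p, reps)
    score = x / p - (1 - x) / (1 - p)  # d/dp of x*log(p) + (1-x)*log(1-p)
    return score.var()

for p in (0.5, 0.9):
    analytic = 1 / (p * (1 - p))       # information about p itself
    logit_scale = p * (1 - p)          # information about the log-odds of p
    print(f"p = {p}: I(p) = {analytic:.2f} "
          f"(simulated {fisher_info_mc(p):.2f}), I(logit) = {logit_scale:.2f}")
# p = 0.5: I(p) = 4.00 (simulated ~4.00), I(logit) = 0.25
# p = 0.9: I(p) = 11.11 (simulated ~11.11), I(logit) = 0.09
```

Note that the parameterization matters: information about p itself is actually largest at extreme p (an extreme proportion is pinned down tightly in absolute terms), while information about the log-odds, the scale on which effect sizes like odds ratios live, peaks at a 50⁄50 split. That is one way to make precise the earlier point that a rare observation carries fewer bits.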