I’m with Rif and GJM that the choice of threshold is arbitrary and confusing.
Binning is always wrong, except for decision theory and computational limitations, including those imposed by pedagogy. You said you’ll follow up with a continuous version, so I can’t blame you for binning here. I don’t have a solution, but it seems more confusing than necessary.
Decisions are discrete, which may suggest binning in analysis, but it is generally better to postpone binning as long as possible; the analysis should tell you how to bin, not just serve as an excuse for it. Decision theory says that you should maximize expected value, which requires probabilistic beliefs, which is a big motivation for philosophical bayesianism. But though the post is about both bayesianism and decision theory, it fails to make the connection.
I think it would be less arbitrary to pander to the majority. Just consider two hypotheses: most readers prefer shorter articles, or most prefer longer. Then your posterior is just one number, the credence that the majority prefers longer articles. But if you impose a fixed cost of change, that partitions the space of posteriors into three pieces, corresponding to changing to longer articles, no change, and changing to shorter articles. The thresholds are in posterior space, not parameter space: the threshold, mapped back to the data, does not depend just on the maximum likelihood parameter, but also on the amount of data. This ends up with a more complicated decision model and a less complicated inference, which may not serve your pedagogical goals, but it shoves the arbitrariness into the numerical costs and benefits, which at least are well-specified, and it avoids the confusion about 25 vs 12.5, which stems from thinking that the words specified the model.
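To make that concrete, here is a minimal sketch of the two-hypothesis model. The assumed response rates under each hypothesis, the benefit of matching the majority, and the fixed cost of change are all placeholder numbers of my own, not anything from the post:

```python
# Minimal sketch of the two-hypothesis decision model described above.
# All numbers (assumed response rates, benefit, change cost) are illustrative.

def posterior_longer(k, n, p_longer=0.6, p_shorter=0.4, prior=0.5):
    """Credence that the majority prefers longer articles, after observing
    k of n respondents asking for 'longer'. Each hypothesis is reduced to a
    single assumed response rate; the binomial coefficient cancels."""
    like_longer = p_longer**k * (1 - p_longer)**(n - k)
    like_shorter = p_shorter**k * (1 - p_shorter)**(n - k)
    return prior * like_longer / (prior * like_longer + (1 - prior) * like_shorter)

def best_action(q, benefit=1.0, change_cost=0.5):
    """Maximize expected value over three actions given posterior q.
    Matching the majority is worth +benefit, mismatching costs -benefit,
    staying put is the zero point, and any change pays change_cost.
    The fixed cost partitions posterior space into three regions."""
    ev = {
        "change to longer": (2 * q - 1) * benefit - change_cost,
        "no change": 0.0,
        "change to shorter": (1 - 2 * q) * benefit - change_cost,
    }
    return max(ev, key=ev.get)

# The same observed proportion (55% asking for longer) leads to different
# decisions depending on how much data backs it up:
print(best_action(posterior_longer(11, 20)))    # modest evidence -> "no change"
print(best_action(posterior_longer(110, 200)))  # same ratio, more data -> "change to longer"
```

With these placeholder numbers the no-change region is 0.25 < q < 0.75, so 55% of 20 responses is not enough to act, while the same proportion out of 200 is: the decision turns on the posterior, not on the observed proportion alone.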