This has nothing to do with Bayesian vs. Frequentist. We’re just calculating probabilities from the problem statement, like you said. From the problem, we know P(H)=1/2, P(Monday|H)=1, etc., which leads to P(H|Monday or Tuesday)=1/2. The 1/3 calculation is not from the problem statement, but rather from a misapplication of large-sample theory: the outcome-dependent sampling biases your estimator.
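The halfer calculation above can be sketched directly from the stated probabilities. A minimal sketch (variable names are mine, not from the thread), where the key assumption is that Beauty is awakened with certainty under either coin outcome:

```python
# Probabilities taken from the problem statement as quoted in the comment.
p_heads = 0.5
p_awake_given_heads = 1.0  # heads: awakened on Monday, so an awakening is certain
p_awake_given_tails = 1.0  # tails: awakened Monday and Tuesday, also certain

# Bayes: P(H | awakened on Monday or Tuesday)
p_awake = p_heads * p_awake_given_heads + (1 - p_heads) * p_awake_given_tails
p_heads_given_awake = p_heads * p_awake_given_heads / p_awake
print(p_heads_given_awake)  # 0.5
```

Since the conditioning event has probability 1 under both heads and tails, it carries no information about the coin, and the posterior equals the prior.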
And it’s strange that you don’t call your approach Frequentist, when you derived it from expected cell counts in repeated samples.
Don’t forget—around here ‘Bayesian’ is used normatively, and as part of some sort of group identification. “Bayesians” here will often use frequentist approaches in particular problems.
But that can be legitimate, as Bayesian calculations are a superset of frequentist calculations. Nothing bars a Bayesian from postulating that a limiting frequency exists in an unbounded number of trials in some hypothetical situation; but you won’t see one, e.g., accept R.A. Fisher’s argument for his use of p-values for statistical inference.
I adopted some frequentist terminology for purposes of this discussion because none of the other explanations I or others had posted seemed to be getting through, and I thought that might be the problem.
The reason I said that there’s a frequentist vs. Bayesian issue here is that the frequentist probability definition I’m most familiar with is P(f) = lim n->inf sum(f(i), i=1..n)/n, where f(i) is the (0-or-1) outcome of the i’th repetition of an independent repeatable experiment, and that definition is hard to reconcile with SB sometimes being asked twice. I assumed that issue, or a rule justified by that issue, was behind your objection.
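The tension with being asked twice can be made concrete by simulation. A hedged sketch (the setup is my own framing of the standard SB protocol): if you count per experiment, the heads frequency converges to 1/2, but if you count per awakening, the two tails awakenings are dependent repetitions of the same flip and the ratio converges to 1/3 instead, so the per-awakening ratio is not a limiting frequency of independent trials:

```python
import random

random.seed(0)
trials = 100_000
heads_experiments = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    # Heads: awakened once (Monday). Tails: awakened twice (Monday, Tuesday).
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_experiments += 1
        heads_awakenings += awakenings  # both awakenings report "heads"

per_experiment = heads_experiments / trials          # ~ 1/2
per_awakening = heads_awakenings / total_awakenings  # ~ 1/3
print(per_experiment, per_awakening)
```

The two tails awakenings always come from a single flip, so the per-awakening sum double-counts tails outcomes; the definition above presupposes one independent f(i) per repetition, which the awakening-counting scheme violates.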