This paper discusses two semantics for Bayesian inference in the case where the hypotheses under consideration are known to be false.
Verisimilitude: p(h) = the probability that h is closest to the truth [according to some measure of closeness-to-truth] among hypotheses under consideration
Counterfactual: p(h) = the probability of h given the (false) supposition that one of the hypotheses under consideration is true
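To make the two definitions concrete, here is a toy sketch of my own (not from the paper), under some simplifying assumptions: hypotheses are point values of a 1-D parameter, closeness-to-truth is |h − t|, and uncertainty about the truth t is a small weighted sample.

```python
# Toy illustration (assumptions: 1-D parameter, closeness = |h - t|,
# discrete weighted belief over the truth t). Not from the paper.
truth_samples = [(0.30, 0.4), (0.40, 0.3), (0.90, 0.1), (1.00, 0.2)]  # (t, weight)
hypotheses = {"h1": 0.40, "h2": 0.50, "h3": 1.00}

def verisimilitude(hyps, samples):
    # p(h) = probability that h is the closest hypothesis to the truth.
    probs = {n: 0.0 for n in hyps}
    for t, w in samples:
        closest = min(hyps, key=lambda n: abs(hyps[n] - t))
        probs[closest] += w
    z = sum(probs.values())
    return {n: p / z for n, p in probs.items()}

def counterfactual(hyps, samples, tol=1e-9):
    # p(h) = p(t = h | t equals one of the hypotheses): restrict the
    # belief over t to the hypothesis set and renormalize.
    mass = {n: sum(w for t, w in samples if abs(t - v) < tol)
            for n, v in hyps.items()}
    z = sum(mass.values())
    return {n: m / z for n, m in mass.items()}

print(verisimilitude(hypotheses, truth_samples))  # h1: 0.7, h2: 0.0, h3: 0.3
print(counterfactual(hypotheses, truth_samples))  # h1: 0.6, h2: 0.0, h3: 0.4
```

Note the two semantics already disagree on this toy belief, and both assign h2 zero probability even though it is quite close to most of the probability mass over the truth.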
In any case, it’s unclear what motivates making decisions by maximizing expected value against such probabilities, which seems like a problem for boundedly rational decision-making.
Ty for the link, but these both seem like clearly bad semantics (e.g., under either of them the second-best hypothesis under consideration might score arbitrarily badly).
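A minimal numeric check of that worry under the verisimilitude reading (a toy example of my own, assuming closeness = |h − t| and a truth that is known exactly):

```python
# Toy check (my construction) of the "second-best scores arbitrarily
# badly" worry under the verisimilitude semantics: all probability mass
# goes to the single closest hypothesis, however narrow its lead.
truth = 0.50
hypotheses = {"h1": 0.49, "h2": 0.52}  # h2 is only barely farther from truth

closest = min(hypotheses, key=lambda h: abs(hypotheses[h] - truth))
p = {h: (1.0 if h == closest else 0.0) for h in hypotheses}
print(p)  # h1 gets probability 1.0; h2 gets 0.0 despite being nearly as close
```

Shrinking the gap between h1 and h2 toward zero leaves p(h2) = 0 throughout, which is the sense in which the second-best hypothesis can score arbitrarily badly.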