Not necessarily. Typically, the events would be all the Lebesgue measurable subsets of the state space. That’s large enough to furnish a suitable event to play the role of the virtual evidence.
True, although I’m not sure this would be “virtual evidence” anymore by Pearl’s definition. But maybe this is the way it should be handled.
A different way of spelling out what’s unique about virtual evidence is that, while the update is compatible with Bayes, it is calculated by some other means (i.e., the likelihoods come from somewhere other than a pre-defined likelihood function for the proposition we’re updating on).
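As a minimal sketch of that point (hypothetical numbers, plain odds form rather than Pearl’s network formulation): a virtual-evidence update just multiplies the prior odds by a likelihood ratio supplied from outside the model.

```python
# Virtual-evidence update as odds x likelihood ratio (illustrative numbers).
def virtual_update(prior_h, likelihood_ratio):
    """Return P(H | virtual evidence) given prior P(H) and an
    externally supplied ratio P(evidence | H) / P(evidence | not-H)."""
    prior_odds = prior_h / (1 - prior_h)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A 4:1 likelihood ratio in favour of H, obtained from some source other
# than a pre-defined likelihood table for the proposition itself:
p = virtual_update(0.2, 4.0)  # prior odds 1:4 -> posterior odds 1:1
print(p)                      # 0.5
```

The point of the sketch is that nothing constrains where `likelihood_ratio` comes from; that freedom is what distinguishes virtual evidence from ordinary conditioning.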
This also resolves Eliezer’s Pascal’s Muggle conundrum over how he should react to 10^10-to-1 evidence in favour of something for which he has a probability of 10^100-to-1 against. The background information X that went into the latter figure is called into question.
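For concreteness, the raw arithmetic with the figures as quoted: combining the two odds ratios still leaves overwhelming odds against, which is why the background information X itself comes under suspicion.

```python
# 10^100-to-1 prior odds against, hit with 10^10-to-1 evidence in favour,
# still leaves 10^90-to-1 against -- far short of belief.
prior_odds_against = 10**100
bayes_factor_for = 10**10
posterior_odds_against = prior_odds_against // bayes_factor_for  # exact: 10^90
print(posterior_odds_against == 10**90)  # True
```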
Does it, though? If you were going to call that background evidence into question for a mere 10^10-to-1 evidence, should the probability have been 10^100-to-1 against in the first place?
This is verging on the question, what do you do when the truth is not in the support of your model? That may be the main way you reach 10^100-to-1 odds in practice. Non-Bayesians like to pose this question as a knock-down of Bayesianism. I don’t agree with them, but I’m not qualified to argue the case.
Once you’ve accepted some X as evidence, i.e. conditioned all your probabilities on X, how do you recover from that when meeting new evidence Y that is extraordinarily unlikely (e.g. 10^−100) given X? Pulling X out from behind the vertical bar may be a first step, but that still leaves you contemplating the extraordinarily unlikely proposition X&Y that you have nevertheless observed.
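One hedged sketch of what “pulling X out from behind the bar” might buy you: treat the background assumption as itself uncertain, with some small prior ε of being wrong. An observation that is astronomically unlikely under X can then push the posterior onto “X was wrong” rather than being absorbed as a 10^−100 coincidence. The numbers below are illustrative, not taken from the discussion.

```python
# If background X is wrong with small prior eps, and Y is astronomically
# unlikely given X but unremarkable given not-X, then observing Y moves
# nearly all posterior mass onto "X was wrong".
eps = 1e-6                # prior that the background assumption X is wrong
p_y_given_x = 1e-100      # Y is astronomically unlikely if X holds
p_y_given_not_x = 0.1     # but unremarkable if X fails

joint_x = (1 - eps) * p_y_given_x        # ~1e-100
joint_not_x = eps * p_y_given_not_x      # 1e-7
p_not_x_given_y = joint_not_x / (joint_x + joint_not_x)
print(p_not_x_given_y)    # ~1.0: Y overturns X instead of confirming a miracle
```

This only works, of course, if ε was in the model to begin with, which is the force of the “truth not in the support of your model” worry above.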