I agree that in the real world you’d have something like “I’m uncertain about whether X or Y will happen, call it 50⁄50. If X happens, I’m 50⁄50 about whether A or B will happen. If Y happens, I’m 50⁄50 about whether B or C will happen.” And it’s not obvious that this should be the same as being 50⁄50 between B and X, and, conditional on X, 50⁄50 between A and C.
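For what it’s worth, spelling out the arithmetic of the example (using nothing beyond the 50⁄50s already stated), the two descriptions induce the same distribution over outcomes:

```latex
% First description: 50/50 over X vs Y, then 50/50 within each branch
P(A) = \tfrac12\cdot\tfrac12 = \tfrac14, \qquad
P(B) = \tfrac12\cdot\tfrac12 + \tfrac12\cdot\tfrac12 = \tfrac12, \qquad
P(C) = \tfrac12\cdot\tfrac12 = \tfrac14

% Second description: 50/50 over B vs X, then 50/50 over A vs C under X
P(B) = \tfrac12, \qquad
P(A) = \tfrac12\cdot\tfrac12 = \tfrac14, \qquad
P(C) = \tfrac12\cdot\tfrac12 = \tfrac14
```

Both descriptions assign A, B, and C probabilities 1⁄4, 1⁄2, and 1⁄4, so any difference in how you treat them can’t come from the outcome probabilities themselves.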
Having those two situations be different is kind of what I mean by giving up on probabilities: your preferences are no longer a function of the probability that outcomes occur; they are a more complicated function of your epistemic state, and so it’s not correct to summarize your epistemic state as a probability distribution over outcomes.
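To make that concrete, here’s a minimal sketch (the nested-lottery representation, the utility numbers, and the worst-case rule are all illustrative choices of mine, not anything from this discussion): a rule that only sees the flattened distribution must treat the two states identically, while a rule that looks at the nested structure can rank them differently.

```python
from fractions import Fraction
from collections import Counter

HALF = Fraction(1, 2)

# Illustrative representation (not from the comment above): a lottery is either
# a terminal outcome (a string) or a list of (probability, sub-lottery) pairs,
# i.e. a nested description of the agent's uncertainty.

# First description: 50/50 over X vs Y; under X it's 50/50 A/B, under Y 50/50 B/C.
state_1 = [
    (HALF, [(HALF, "A"), (HALF, "B")]),  # branch X
    (HALF, [(HALF, "B"), (HALF, "C")]),  # branch Y
]

# Second description: 50/50 between B outright and X; under X, 50/50 A/C.
state_2 = [
    (HALF, "B"),
    (HALF, [(HALF, "A"), (HALF, "C")]),  # branch X
]


def flatten(lottery, weight=Fraction(1)):
    """Marginal distribution over terminal outcomes."""
    if isinstance(lottery, str):
        return Counter({lottery: weight})
    dist = Counter()
    for p, sub in lottery:
        dist.update(flatten(sub, weight * p))
    return dist


def expected_utility(lottery, u):
    """Ordinary expected utility of a (possibly nested) lottery."""
    if isinstance(lottery, str):
        return Fraction(u[lottery])
    return sum(p * expected_utility(sub, u) for p, sub in lottery)


def worst_case_then_expectation(lottery, u):
    """An example non-expected-utility rule: worst case over the first-stage
    branches (ignoring their stated probabilities), ordinary expectation below."""
    if isinstance(lottery, str):
        return Fraction(u[lottery])
    return min(expected_utility(sub, u) for _, sub in lottery)


u = {"A": 0, "B": 1, "C": 2}  # arbitrary utilities, just for illustration

# Both epistemic states flatten to the same outcome distribution (A 1/4, B 1/2, C 1/4),
# so any rule that only looks at that distribution must treat them alike...
assert flatten(state_1) == flatten(state_2)
assert expected_utility(state_1, u) == expected_utility(state_2, u) == 1

# ...but a rule sensitive to the nested structure can rank them differently.
assert worst_case_then_expectation(state_1, u) == Fraction(1, 2)
assert worst_case_then_expectation(state_2, u) == 1
```

The specific rule doesn’t matter; the point is just that any rule which separates the two states has to take the nested structure, not the outcome distribution, as its input.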
I don’t think this is totally crazy, but I think it’s worth recognizing it as a fairly drastic move.
Would a decision theory like this count as “giving up on probabilities” in the sense in which you mean it here?