OK! But I still feel like there’s something being swept under the carpet here. And I think I’ve managed to put my finger on what’s bothering me.
There are various things we could require our agents to have preferences over, but I am not sure that probability distributions over outcomes is the best choice. (Even though I do agree that the things we want our agents to have preferences over have essentially the same probabilistic structure.)
A weaker assumption we might make about agents’ preferences is that they are over possibly-uncertain situations, expressed in terms of the agent’s epistemic state.
And I don’t think “nested” possibly-uncertain-situations even exist. There is no such thing as assigning 50% probability to each of (1) assigning 50% probability to each of A and B, and (2) assigning 50% probability to each of A and C. There is such a thing as assigning 50% probability now to assigning those different probabilities in five minutes, and by the law of iterated expectations your final probabilities for A,B,C must then obey the distributive law, but the situations are still not literally the same, and I think that in divergent-utility situations we can’t assume that your preferences depend only on the final outcome distribution.
Another way to say this: given that the Ai and Bi are lotteries rather than actual outcomes, and that combinations like ∑piAi mean something more complicated than they may initially look like they mean, the dominance axioms are less obvious than the notation makes them look. Even though there are no divergences in the sums-over-probabilities that arise when you do the calculations, there are divergences in the implied something-like-sums-over-weighted-utilities, and in my formulation you really are having to rearrange outcomes as well as probabilities when you do the calculations.
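To make the rearrangement worry concrete with a toy case of my own (not the lotteries under discussion, and with made-up utility values): take a lottery over outcomes i = 1, 2, 3, … whose probabilities sum to 1 without any trouble, but whose sum over weighted utilities converges only conditionally, so reordering the outcomes changes its value.

```python
from itertools import islice

# Hypothetical lottery: outcome i has probability p_i = 2**-i and
# utility u_i = (-2)**i / i, so each weighted term is
# p_i * u_i = (-1)**i / i.  The probabilities are unproblematic, but
# the sum over weighted utilities is only conditionally convergent:
# rearranging the outcomes changes its value (Riemann rearrangement).

def natural_order():
    i = 1
    while True:
        yield (-1) ** i / i  # p_i * u_i in the original outcome order
        i += 1

def rearranged_order():
    # Same terms, different order: two positive (even-i) terms for
    # every negative (odd-i) term.
    even, odd = 2, 1
    while True:
        yield 1 / even
        even += 2
        yield 1 / even
        even += 2
        yield -1 / odd
        odd += 2

natural = sum(islice(natural_order(), 100_000))        # ≈ -ln 2 ≈ -0.693
rearranged = sum(islice(rearranged_order(), 100_000))  # drifts to ≈ -0.347
print(natural, rearranged)
```

The two partial sums settle at different limits even though they contain exactly the same weighted-utility terms, which is why "just rearrange the outcomes" is not an innocent step once the implied utility sums fail to converge absolutely.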
I agree that in the real world you’d have something like “I’m uncertain about whether X or Y will happen, call it 50⁄50. If X happens, I’m 50⁄50 about whether A or B will happen. If Y happens, I’m 50⁄50 about whether B or C will happen.” And it’s not obvious that this should be the same as being 50⁄50 between B and X, where conditional on X you’re 50⁄50 between A and C.
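A quick sketch (my own illustration) of why these two situations are hard to tell apart by final probabilities alone: flattening either nested structure via iterated expectation yields the identical distribution over A, B, C.

```python
from collections import Counter
from fractions import Fraction

def flatten(lottery):
    """Collapse a nested lottery {branch: prob}, where a branch is
    either an outcome label or a tuple of (branch, prob) pairs, into a
    flat distribution over outcomes via iterated expectation."""
    dist = Counter()
    for branch, p in lottery.items():
        if isinstance(branch, str):
            dist[branch] += p
        else:
            for outcome, q in flatten(dict(branch)).items():
                dist[outcome] += p * q
    return dict(dist)

half = Fraction(1, 2)

# "50/50 X vs Y; under X, 50/50 A/B; under Y, 50/50 B/C"
sit1 = {(("A", half), ("B", half)): half,
        (("B", half), ("C", half)): half}

# "50/50 B vs X; under X, 50/50 A/C"
sit2 = {"B": half,
        (("A", half), ("C", half)): half}

print(flatten(sit1))  # {'A': 1/4, 'B': 1/2, 'C': 1/4}
print(flatten(sit2))  # the same final distribution
```

Both epistemic structures collapse to A: 1⁄4, B: 1⁄2, C: 1⁄4, so any preference that distinguishes them must depend on more than the final outcome distribution.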
Having those two situations be different is kind of what I mean by giving up on probabilities—your preferences are no longer a function of the probability that outcomes occur, they are a more complicated function of your epistemic state, and so it’s not correct to summarize your epistemic state as a probability distribution over outcomes.
I don’t think this is totally crazy, but I think it’s worth recognizing it as a fairly drastic move.
Would a decision theory like this count as “giving up on probabilities” in the sense in which you mean it here?