Thanks for the Bradley reference. He does indeed work in Jeffrey’s framework. On conditional utility (“conditional desirability” in Jeffrey’s terminology), Bradley references another paper from 1999 where he goes into a bit more detail on the motivation:
To arrive at our candidate expression for conditional desirabilities in terms of unconditional ones, we reason as follows. Getting the news that XY is true is just the same as getting both the news that X is true and the news that Y is true. But DesXY is not necessarily equal to DesX + DesY because of the way in which the desirabilities of X and Y might depend on one another. Unless X and Y are probabilistically independent, for instance, the news that X is true will affect the probability and, hence, the desirability of Y. Or it might affect the desirability of Y directly, because it is the sort of condition that makes Y less or more desirable. It is natural then to think of DesXY as equal, not to the sum of the desirabilities of X and Y, but to the sum of the desirability of X and the desirability of Y given that X is true.
(By DesXY he means U(X∧Y).)
I also found a more recent (2017) book of his, where he defines U(A|B):=U(A∧B)−U(B) and where he takes the probability axioms, Jeffrey’s desirability axiom, and U(⊤)=0 as axioms. So pretty much the same way we did here.
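For concreteness, here is a quick toy sanity check of that definition in a simple “news value” model. The worlds, probabilities, and values below are made up for illustration; this is a sketch, not Bradley’s own construction:

```python
# Toy model: U(A) = E[value | A] - E[value], so U(tautology) = 0 by construction.
# Conditional utility is then Bradley's U(A|B) := U(A∧B) − U(B).
worlds = {  # world: (probability, value) — hypothetical numbers
    "sun-warm": (0.4, 12.0),
    "sun-cold": (0.1, 4.0),
    "rain-warm": (0.2, -2.0),
    "rain-cold": (0.3, -6.0),
}

def prob(A):
    return sum(worlds[w][0] for w in A)

def U(A):
    e_all = sum(p * v for p, v in worlds.values())           # E[value]
    e_A = sum(worlds[w][0] * worlds[w][1] for w in A) / prob(A)
    return e_A - e_all                                       # news value of A

sun = {"sun-warm", "sun-cold"}
warm = {"sun-warm", "rain-warm"}
u_sun_given_warm = U(sun & warm) - U(warm)   # Bradley's U(A|B), A = sun, B = warm
```

Here U(A|B) measures how much better the news A∧B is than the news B alone, which matches the motivation in the 1999 quote.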
So yeah, I think that settles conditional utility.
In the book Bradley has also some other interesting discussions, such as this one:
[...] Richard Jeffrey is often said to have defended a specific one, namely the ‘news value’ conception of benefit. It is true that news value is a type of value that unambiguously satisfies the desirability axioms. Consider getting the news that a trip to the beach is planned and suppose that one enjoys the beach in sunny weather but hates it in the rain. Then, whether this is good news or not will depend on how likely it is that it is going to be sunny or rainy. If you like, what the news means for you, what its implications are, depends on your beliefs. If it’s going to rain, then the news means a day of being wet and cold; if it’s going to be sunny, then the news means an enjoyable day swimming. In the absence of certainty about the weather, one’s attitude to the prospect will lie somewhere between one’s attitude to these two prospects, but closer to the one that is more probable. This explains why news value should respect the axiom of desirability. It also gives a rationale for the axiom of normality, for news that is certain is no news at all and hence cannot be good or bad.
Nonetheless, considerable caution should be exercised in giving Desirabilism this interpretation. In particular, it should not be inferred that Jeffrey’s claim is that we value something because of its news value. News value tracks desirability but does not constitute it. Moreover, it does not always track it accurately. Sometimes getting the news that X tells us more than just that X is the case because of the conditions under which we get the news. To give an extreme example: if I believe that I am isolated, then I cannot receive any news without learning that this is not the case. This ‘extra’ content is no part of the desirability of X.
Our main interest is in desirability as a certain kind of grounds for acting in conditions of uncertainty. In this respect, it is perhaps more helpful to fix one’s intuitions using the concept of willingness to pay than that of news value. For if one imagines that all action is a matter of paying to have prospects made true, then the desirabilities of these prospects will measure (when appropriately scaled) the price that one is willing to pay for them. It is clear that one should not be willing to pay anything to make a tautology true and quite plausible that one should price the prospect of either X or Y by the sum of the probability-discounted prices of each. So this interpretation is both formally adequate and exhibits the required relationship between desirability and action.
Anyway, someone should do a writeup of our findings, right? :)
Sure, I’ve found it to be an interesting framework to think in so I suppose someone else might too. You’re the one who’s done the heavy lifting so far so I’ll let you have an executive role.
If you want me to write up a first draft I can probably do it end of next week. I’m a bit busy for at least the next few days.
I think I will write a somewhat longer post as a full introduction to Jeffrey-style utility theory. But I’m still not quite sure about some things. For example, Bradley suggests that we can also interpret the utility of a proposition as the maximum amount of money we would pay (to God, say) to make it true. But I’m not sure whether that money would track expected utility (probability times utility) or just utility. Generally, the relation between the interpretation of expected utility and the interpretation of utility is not yet clear to me. I have to think a bit more about it...
I’m not sure this is what you mean, but yes: in the case of acts, it is indeed the case that only the utility of an action matters for our choice, not its expected utility. We don’t assign probabilities to our own possible actions when choosing among them; we just pick the action with the highest utility.
But only some propositions describe acts. I can’t choose (make true/certain) that the sun shines tomorrow, so the probability of the sun shining tomorrow matters, not just its utility. Now, if the utility of the sun shining tomorrow is the maximum amount of money I would pay for the sun shining tomorrow, is that plausible? Assuming the utility of sunshine tomorrow is a fixed value x, wouldn’t I pay less money if sunshine is very likely anyway, and more if it is unlikely?
On the other hand, I believe (but am uncertain) that the utility of a proposition moves towards 0 as its probability rises. (Which would correctly predict that I pay less for sunshine when it is likely anyway.) But I notice I don’t have a real understanding of why, or in which sense, this happens! Specifically, we know that tautologies have utility 0, but I don’t even see how to prove that all propositions with probability 1 (even non-tautologies) have utility 0. Jeffrey states it as if it’s obvious, but he doesn’t actually give a proof. And, more generally, it also isn’t clear to me why the utility of a proposition would move towards 0 as its probability moves towards 1, if that is indeed the case.
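For what it’s worth, here is a sketch of why P(X)=1 should force U(X)=0, assuming the desirability axiom is read as covering the case where one disjunct has probability 0 and U(¬X) is defined. (Many statements of the axiom require both disjuncts to have positive probability, in which case this is a limiting argument rather than a proof; that caveat may be why Jeffrey treats it as obvious without proving it.)

```latex
% X and ¬X are incompatible and X ∨ ¬X = ⊤, so the desirability axiom gives
U(\top) \;=\; \frac{P(X)\,U(X) + P(\neg X)\,U(\neg X)}{P(X) + P(\neg X)}
        \;=\; \frac{1 \cdot U(X) + 0 \cdot U(\neg X)}{1 + 0}
        \;=\; U(X),
% and with the normalisation U(⊤) = 0 this yields U(X) = 0.
```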
I notice I’m still far from having a good level of understanding of (Jeffrey’s) utility theory...
So we have that
[...] Richard Jeffrey is often said to have defended a specific one, namely the ‘news value’ conception of benefit. It is true that news value is a type of value that unambiguously satisfies the desirability axioms.
but at the same time
News value tracks desirability but does not constitute it. Moreover, it does not always track it accurately. Sometimes getting the news that X tells us more than just that X is the case because of the conditions under which we get the news.
And I can see how, starting from this, you would get that U(⊤)=0. However, I think one of the remaining confusions is how you would go in the other direction: how can you go from the premise that we shift utilities to be 0 for tautologies to the claim that we value something largely according to how unlikely it is?
And then we also have the desirability axiom
U(A∨B) = (P(A)U(A) + P(B)U(B)) / (P(A) + P(B))
for all A and B such that P(A∧B)=0 together with Bayesian probability theory.
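As a quick numerical check (toy worlds and values, purely illustrative), news value defined as E[value|A] − E[value] does satisfy this axiom for disjoint A and B:

```python
# Hypothetical three-world model; U(A) = E[value | A] − E[value].
worlds = {"sun": (0.5, 10.0), "rain": (0.3, -5.0), "cloud": (0.2, 2.0)}

def prob(A):
    return sum(worlds[w][0] for w in A)

def U(A):
    e_all = sum(p * v for p, v in worlds.values())
    return sum(worlds[w][0] * worlds[w][1] for w in A) / prob(A) - e_all

A, B = {"sun"}, {"rain"}                     # disjoint, so P(A∧B) = 0
lhs = U(A | B)                               # U(A∨B)
rhs = (prob(A) * U(A) + prob(B) * U(B)) / (prob(A) + prob(B))
assert abs(lhs - rhs) < 1e-9                 # desirability axiom holds
assert abs(U(set(worlds))) < 1e-9            # U(⊤) = 0
```

The assertions pass for any values here, since a conditional expectation shifted by a constant satisfies the axiom identically.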
What I was talking about in my previous comment goes against the desirability axiom: I meant that, for X = "sun with probability p and rain with probability (1−p)", there could in general be subjects who weight certain outcomes proportionally more (or less) than usual, so that U(X) ≠ pU(Sun) + (1−p)U(Rain) for some probabilities p. Since that equality derives directly from the desirability axiom, it was wrong of me to generalise that far.
But, to get back to the confusion at hand, we need to unpack the tautology axiom a bit. If we say that a proposition ⊤ is a tautology if and only if P(⊤)=1 [1], then we can see that any proposition that is no news to us has zero utils as well.
And I think it is well to keep in mind that learning that, e.g., sun tomorrow is more probable than we once thought does not necessarily make us prefer sun tomorrow less, even though the number of utils for sun tomorrow has decreased (in an absolute sense). This fits nicely with the money analogy, because you wouldn’t buy something that you expect with certainty anyway [2], but this doesn’t mean that you prefer it any less compared to some worse outcome that you expected earlier. It is just that we have updated on our observations, so that the utility function now reflects our current beliefs. If you prefer A to B then this is a fact regardless of the probabilities of those outcomes. When the probabilities change, what changes is the mapping from propositions to real numbers (the utility function), and it changes only by a shift (and possibly a scaling) by a real number.
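That shrinking-toward-zero behaviour, with the underlying preference held fixed, can be illustrated in a two-world sketch (the values are made up):

```python
# Two worlds, sun and rain, with fixed underlying values.
v_sun, v_rain = 10.0, -5.0       # the preference sun over rain never changes

def U_sun(p):
    """News value of sun when P(sun) = p: v_sun − E[value]."""
    expected = p * v_sun + (1 - p) * v_rain
    return v_sun - expected      # algebraically (1 − p)(v_sun − v_rain)

# As p rises toward 1, the news value of sun shrinks toward 0:
values = [U_sun(p) for p in (0.1, 0.5, 0.9, 0.999)]
```

So U_sun falls as sun becomes more probable and reaches 0 at certainty, while v_sun > v_rain throughout: exactly the “no news, no utils” point above.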
At least, that is the interpretation I’ve arrived at.
Isn’t that just a question of whether you assume expected utility or not? In the general case it is only utility, not expected utility, that matters.
[1] This seems reasonable but non-trivial to prove, depending on how we translate between logic and probability.
[2] If you do, you either don’t actually expect it or have a bad sense of business.