I spent quite a few hours going through Peterson’s 2008 book (online copy available here) to see if there were any interesting ideas, and found the time largely wasted. (This was my initial intuition, but I thought I’d take a closer look since Luke emailed me directly to ask me to comment.) It would take even more time to write up a good critique, so I’ll just point out the most glaring problem: Peterson’s proposal for how to derive a utility function from one’s subjective uncertainty about one’s own choices, as illustrated in this example:
This means that if the probability that you choose salmon is 2⁄3, and the probability that you choose tuna is 1⁄3, then your utility of salmon is twice as high as that of tuna.
What if we apply this idea to the choice between $20 and $30?
Let us now return to the problem of perfect discrimination mentioned above. As explained by Luce, the problem is that ’the [utility] scale is defined only over a set having no pairwise perfect discriminations, which is probably only a small portion of any dimension we might wish to scale’. That is, the problem lies in the assumption that p(x > y) ≠ 0, 1 for all x, y in B. After all, this condition is rather unlikely to be satisfied, because most agents know for sure that they prefer $40 to $20, and $30 to $20, etc.
Peterson tries to solve this problem in section 5.3, but his solution makes no sense. From page 90:
Suppose, for example, that I wish to determine my utility of $20, $30, and $40, respectively. In this case, the non-perfect object can be a photo of my beloved cat Carla, who died when I was fourteen. If offered a choice between $20 and the photo, the probability is 1⁄4 that I would choose the money; if offered a choice between $30 and the photo, the probability is 2⁄4 that I would choose the money; and if offered a choice between $40 and the photo, the probability is 3⁄4 that I would choose the money. This information is sufficient for constructing a single ratio scale for all four objects. Here is how to do it: The point of departure is the three local scales, which have one common element, the photo of Carla. The utility of the photo is the same in all three pairwise choices. Let u(photo) = 1. Then the utility of money is calculated by calibrating the three local scales such that u(photo) = 1 in all of them.
So we end up with u($20)=1/3, u($30)=u(photo)=1, u($40)=3. But this utility function now implies that given a choice between $20 and $30, you’d choose $20 with probability 1⁄4, and $30 with probability 3⁄4, contradicting the initial assumption that you’d choose $30 with certainty. I have no idea how Peterson failed to notice this.
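To make the arithmetic explicit, here is a minimal Python sketch of the calculation above. The function name and the script are my own framing of Peterson’s probability-ratio rule, not his code; the probabilities are the ones from his photo example.

```python
from fractions import Fraction

# Peterson's rule, as quoted above: the ratio of utilities equals the ratio
# of the probabilities with which I believe I would choose each option.
def utility_ratio(p_choose_x_over_y):
    """u(x)/u(y) implied by the probability of choosing x over y."""
    return p_choose_x_over_y / (1 - p_choose_x_over_y)

# Local scales from the photo example, calibrated so that u(photo) = 1.
u_photo = Fraction(1)
u_20 = u_photo * utility_ratio(Fraction(1, 4))   # 1/3
u_30 = u_photo * utility_ratio(Fraction(2, 4))   # 1
u_40 = u_photo * utility_ratio(Fraction(3, 4))   # 3

# Running the rule in reverse gives the implied probability of choosing $20
# over $30, contradicting certainty of choosing $30.
p_20_over_30 = u_20 / (u_20 + u_30)
print(u_20, u_30, u_40)   # 1/3 1 3
print(p_20_over_30)       # 1/4
```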
I’ve only read your comment, not anything by Peterson, so I’m just asking for clarification on what he claims to do:
In your first quote of him, he claims to derive utilities from a certain kind of subjective probability. But does he also claim to make the converse derivation? That is, does he also claim to derive those same subjective probabilities from utilities as you do in your final paragraph? It’s not clear to me that your first quote of him commits him to doing this.
That is, does he also claim to derive those same subjective probabilities from utilities as you do in your final paragraph?
No, he doesn’t commit to doing this, but taking this defense doesn’t really save his idea. Suppose that, instead of thinking I’d take $30 over $20 with probability 1, I think I’d make that choice with probability 0.99. Now u($30)/u($20) has to be 99, but u($30)=u(photo) and u(photo)/u($20)=3 still hold, so we can no longer obtain a consistent utility function using Peterson’s proposal. What sense does it make that we can derive a utility function if the probability of taking $30 over $20 is either 1 or 3⁄4, but not anything else? As far as I can tell, there is no reason to expect that our actual beliefs about hypothetical choices like these are such that Peterson’s proposal can output a utility function from them, and he doesn’t address the issue in his book.
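To spell out the clash, here is a short continuation of the sketch above (illustrative only; the variable names are mine):

```python
from fractions import Fraction

# The 0.99 belief fixes the ratio u($30)/u($20) directly...
ratio_from_direct_belief = Fraction(99, 100) / Fraction(1, 100)   # 99

# ...while the photo comparisons fix the same ratio a different way:
# u($30) = u(photo) and u(photo)/u($20) = (3/4)/(1/4) = 3.
ratio_from_photo_scales = Fraction(3, 4) / Fraction(1, 4)          # 3

# 99 != 3, so no single utility function satisfies all three constraints.
print(ratio_from_direct_belief, ratio_from_photo_scales)
```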
What sense does it make that we can derive a utility function if the probability of taking $30 over $20 is either 1 or 3⁄4, but not anything else?
It seems that he should account for the fact that this subjective probability will update. For example, you quoted him as saying
This means that if the probability that you choose salmon is 2⁄3, and the probability that you choose tuna is 1⁄3, then your utility of salmon is twice as high as that of tuna.
But once I know that u(salmon)/u(tuna) = 2, I know that I will choose salmon over tuna. I therefore no longer assign the prior subjective probabilities that led me to this utility-ratio. I assign a new posterior subjective probability, namely certainty that I will choose salmon. This new subjective probability can no longer be used to derive the utility-ratio u(salmon)/u(tuna) = 2. I have learned the utility-ratio, but, in doing so, I have destroyed the state of affairs that allowed me to learn it. I might remember how I derived the utility-ratio, but I can no longer re-derive it in the same way. I have, as it were, “burned up” some of my prior subjective uncertainty, so I can’t use it any more.
Now suppose that I am so unfortunate as to forget the value of the utility-ratio u(salmon)/u(tuna). However, I still retain the posterior subjective certainty that I choose salmon over tuna. Now how am I going to get that utility-ratio back? I’m going to have to find some other piece of prior subjective uncertainty to “burn”. For example, I might notice some prior uncertainty about whether I would choose salmon over $5 and about whether I would choose tuna over $5. Then I could proceed as Peterson describes in the photo example.
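Here is a sketch of that recovery step, with made-up probabilities for the $5 comparisons; nothing in the discussion fixes these numbers, and they are chosen only so that the ratio of 2 comes back out.

```python
from fractions import Fraction

def utility_ratio(p_choose_x_over_y):
    """u(x)/u(y) implied by the probability of choosing x over y."""
    return p_choose_x_over_y / (1 - p_choose_x_over_y)

# Hypothetical prior uncertainties about choices against a common $5 reference.
p_salmon_over_5 = Fraction(4, 5)   # assumed for illustration
p_tuna_over_5 = Fraction(2, 3)     # assumed for illustration

# Calibrate both local scales so that u($5) = 1, then take the ratio.
u_salmon = utility_ratio(p_salmon_over_5)   # 4
u_tuna = utility_ratio(p_tuna_over_5)       # 2
print(u_salmon / u_tuna)                    # 2, recovering u(salmon)/u(tuna)
```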
So maybe Peterson’s proposal can be saved by distinguishing between prior and posterior subjective probabilities for my choices in this way. Prior probabilities would be required to be consistent in the following sense: If
my prior odds of choosing A over B are 1:1, and
my prior odds of choosing B over C are 3:1,
then
my prior odds of choosing A over C have to be 3:1.
Thus, in the photo example, the prior probability of taking $30 over $20 has to be 3⁄4, given the other probabilities. It’s not allowed to be 0.99. But the posterior probability is allowed to be 1, provided that I’ve already “burned up” some piece of prior subjective uncertainty to arrive at that certainty. In this way, perhaps, it makes sense to say that “the probability of taking $30 over $20 is either 1 or 3⁄4, but not anything else”.
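One way to state that consistency requirement as a check (a hypothetical sketch; the function is mine and just encodes the multiplication of odds):

```python
def odds_consistent(odds_a_over_b, odds_b_over_c, odds_a_over_c):
    """Prior odds cohere only if odds(A over C) = odds(A over B) * odds(B over C)."""
    return odds_a_over_c == odds_a_over_b * odds_b_over_c

# Photo example: odds($30 over photo) = 1, odds(photo over $20) = 3,
# so the prior odds of $30 over $20 are forced to be 3 (probability 3/4).
print(odds_consistent(1, 3, 3))    # True
print(odds_consistent(1, 3, 99))   # False: probability 0.99 is ruled out as a prior
```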
I wrote,
Now suppose that I am so unfortunate as to forget the value of the utility-ratio u(salmon)/u(tuna).
But, on reflection, the possibility of forgetting knowledge is probably a can of worms best left unopened. For, one could ask what would happen if I remembered that I was highly confident that I would choose salmon over tuna, but I forgot that I was absolutely certain about this. It would then be hard to see how to avoid inconsistent utility functions, as you describe.
Perhaps it’s better to suppose that you’ve shown, by some unspecified means, that u(salmon) > u(tuna), but that you did so without computing the exact utility-ratio. Then you become certain that you choose salmon over tuna, but you no longer have the prior subjective uncertainty that you need to compute the ratio u(salmon)/u(tuna) directly. That’s the kind of case where you might be able to find some other piece of prior subjective uncertainty, as I describe in the above paragraph.