That is, does he also claim to derive those same subjective probabilities from utilities as you do in your final paragraph?
No, he doesn’t commit to doing this, but taking this defense doesn’t really save his idea. Consider what happens if, instead of thinking I’d take $30 over $20 with probability 1, I think I’d make that choice with probability 0.99. Now u($30)/u($20) has to be 99, but u($30) = u(photo) and u(photo)/u($20) = 3 still hold, so we can no longer obtain a consistent utility function using Peterson’s proposal. What sense does it make that we can derive a utility function if the probability of taking $30 over $20 is either 1 or 3⁄4, but not anything else? As far as I can tell, there is no reason to expect that our actual beliefs about hypothetical choices like these are such that Peterson’s proposal can output a utility function from them, and he doesn’t address the issue in his book.
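To spell out the arithmetic, here is a minimal sketch in Python, assuming the reading of Peterson’s rule quoted further down (the utility ratio equals the ratio of the two choice probabilities); the function name is mine:

```python
def utility_ratio(p_choose_a: float) -> float:
    """u(A)/u(B) implied by the probability p that I choose A over B,
    reading the rule as: the utility ratio equals the ratio of the two
    choice probabilities, p / (1 - p)."""
    return p_choose_a / (1.0 - p_choose_a)

# Fixed by the rest of the photo example:
# u($30) = u(photo) and u(photo)/u($20) = 3, so u($30)/u($20) must be 3.
ratio_via_photo = 3.0

# But if I think I'd take $30 over $20 with probability 0.99:
ratio_via_belief = utility_ratio(0.99)    # ≈ 99

print(ratio_via_photo, ratio_via_belief)  # 3 vs 99: no consistent u exists
```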
What sense does it make that we can derive a utility function if the probability of taking $30 over $20 is either 1 or 3⁄4, but not anything else?
It seems that he should account for the fact that this subjective probability will update. For example, you quoted him as saying
This means that if the probability that you choose salmon is 2⁄3, and the probability that you choose tuna is 1⁄3, then your utility of salmon is twice as high as that of tuna.
But once I know that u(salmon)/u(tuna) = 2, I know that I will choose salmon over tuna. I therefore no longer assign the prior subjective probabilities that led me to this utility-ratio. I assign a new posterior subjective probability, namely certainty that I will choose salmon. This new subjective probability can no longer be used to derive the utility-ratio u(salmon)/u(tuna) = 2. I have learned the utility-ratio, but, in doing so, I have destroyed the state of affairs that allowed me to learn it. I might remember how I derived the utility-ratio, but I can no longer re-derive it in the same way. I have, as it were, “burned up” some of my prior subjective uncertainty, so I can’t use it any more.
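A small sketch of this “burning up” point, under the same reading of the rule (the 2⁄3 and 1⁄3 are the figures from the quote; everything else is just illustration):

```python
# Prior subjective probabilities about my own choice, from the quote:
p_salmon, p_tuna = 2 / 3, 1 / 3
ratio = p_salmon / p_tuna      # u(salmon)/u(tuna) = 2.0

# Having learned the ratio, I update to certainty that I choose salmon:
p_salmon, p_tuna = 1.0, 0.0

# The same rule no longer applies: 1.0 / 0.0 is undefined, so these
# posterior probabilities cannot be used to re-derive the ratio of 2.
```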
Now suppose that I am so unfortunate as to forget the value of the utility-ratio u(salmon)/u(tuna). However, I still retain the posterior subjective certainty that I choose salmon over tuna. Now how am I going to get that utility-ratio back? I’m going to have to find some other piece of prior subjective uncertainty to “burn”. For example, I might notice some prior uncertainty about whether I would choose salmon over $5 and about whether I would choose tuna over $5. Then I could proceed as Peterson describes in the photo example.
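To make that recovery concrete, a sketch with made-up numbers (the 0.8 and 2⁄3 below are purely hypothetical prior probabilities, not anything Peterson gives):

```python
def ratio(p: float) -> float:
    """u(A)/u(B) implied by the prior probability p of choosing A over B."""
    return p / (1.0 - p)

# Hypothetical remaining prior uncertainty about choices against $5:
u_salmon_vs_5 = ratio(0.8)      # u(salmon)/u($5) = 4.0
u_tuna_vs_5 = ratio(2 / 3)      # u(tuna)/u($5)   = 2.0

# Chain the two ratios through the common option $5:
u_salmon_vs_tuna = u_salmon_vs_5 / u_tuna_vs_5   # 2.0, the forgotten ratio
```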
So maybe Peterson’s proposal can be saved by distinguishing between prior and posterior subjective probabilities for my choices in this way. Prior probabilities would be required to be consistent in the following sense: If
my prior odds of choosing A over B are 1:1, and
my prior odds of choosing B over C are 3:1,
then
my prior odds of choosing A over C have to be 3:1.
Thus, in the photo example, the prior probability of taking $30 over $20 has to be 3⁄4, given the other probabilities. It’s not allowed to be 0.99. But the posterior probability is allowed to be 1, provided that I’ve already “burned up” some piece of prior subjective uncertainty to arrive at that certainty. In this way, perhaps, it makes sense to say that “the probability of taking $30 over $20 is either 1 or 3⁄4, but not anything else”.
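Here is a sketch of that consistency requirement as a check; the helper and its name are mine, and the only thing assumed is that prior odds chain multiplicatively in the way the list above requires:

```python
def chain_odds(odds_a_over_b: float, odds_b_over_c: float) -> float:
    """Prior odds of choosing A over C, assuming odds chain multiplicatively."""
    return odds_a_over_b * odds_b_over_c

# The A/B/C example from the list above: 1:1 and 3:1 give 3:1.
print(chain_odds(1.0, 3.0))        # 3.0

# The photo example: odds($30 : photo) = 1:1 since u($30) = u(photo),
# and odds(photo : $20) = 3:1, so the prior odds of $30 over $20 are 3:1.
odds = chain_odds(1.0, 3.0)
print(odds / (1.0 + odds))         # 0.75, i.e. the prior probability must be 3/4
```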
Now suppose that I am so unfortunate as to forget the value of the utility-ratio u(salmon)/u(tuna).
But, on reflection, the possibility of forgetting knowledge is probably a can of worms best left unopened. For one could ask what would happen if I remembered that I was highly confident that I would choose salmon over tuna, but forgot that I was absolutely certain about this. It would then be hard to see how to avoid inconsistent utility functions, as you describe.
Perhaps it’s better to suppose that you’ve shown, by some unspecified means, that u(salmon) > u(tuna), but that you did so without computing the exact utility-ratio. Then you become certain that you choose salmon over tuna, but you no longer have the prior subjective uncertainty that you need to compute the ratio u(salmon)/u(tuna) directly. That’s the kind of case where you might be able to find some other piece of prior subjective uncertainty to “burn”, as I described above with the salmon-vs-$5 example.