(Perhaps you were going to address this in a later post, but) the iterated decisions type of argument for EUM and the one-shot arguments like VNM don’t seem comparable to me, in that they don’t actually support the same conclusions. The iterated decision arguments tell you what your utility function should be (linear in the amount of good things if future opportunities don’t depend on past results; possibly nonlinear otherwise, as in the Kelly criterion), while the one-shot arguments importantly don’t, instead simply concluding that there should exist some utility function accurately reflecting your preferences.
The ‘iterated decisions’-type arguments support EUM in a given decision problem if you expect to face the exact same decision problem over and over again. The ‘representation theorem’ arguments support EUM for a given decision problem, without qualification.
In either case, your utility function is meant to be constructed from your underlying preference relation over the set of alternatives for the given problem. The form of the function can be linear in some things or not, that’s something to be determined by your preference relation and not the arguments for EUM.
No, what I was trying to say is that this is true only for representation theorem arguments, but not for the iterated decisions type of argument.
Suppose your utility function is some monotonically increasing function of your eventual wealth. If you’re facing a choice between some set of lotteries over monetary payouts, and you expect to face an extremely large number of i.i.d. iterations of this choice, then by the law of large numbers, you should pick the option with the highest expected monetary value each time, as this maximizes your actual eventual wealth (and thus your actual utility) with probability near 1.
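To make this concrete, here’s a minimal simulation sketch of that argument (the two lotteries and their payouts are made-up illustrations, not anything from the post):

```python
import random

# Two hypothetical lotteries over monetary payouts (numbers are made up):
#   A: 50% chance of $100, 50% chance of $0  -> expected value $50
#   B: $40 for sure                          -> expected value $40
def lottery_a():
    return 100 if random.random() < 0.5 else 0

def lottery_b():
    return 40

N = 100_000  # i.i.d. iterations of the same choice

# Always picking the higher-EV option vs. always picking the safer option:
wealth_a = sum(lottery_a() for _ in range(N))
wealth_b = sum(lottery_b() for _ in range(N))

# By the law of large numbers, wealth_a/N converges to 50 and wealth_b/N to 40,
# so the higher-EV strategy ends up wealthier with probability near 1, and
# hence higher-utility for *any* monotonically increasing utility of wealth.
print(wealth_a / N, wealth_b / N)
```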
Or suppose you expect to face an extremely large number of similarly-distributed opportunities to place bets at some given odds, at whatever stakes you choose on each step, subject to the constraint that you can’t bet more money than you have. Then the Kelly criterion says that if you choose the stake that maximizes your expected log wealth each time, this will maximize your eventual actual wealth (and thus your actual utility, since that’s monotonically increasing in your eventual wealth) with probability near 1.
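Here’s a similar sketch for the Kelly case (the win probability and the candidate betting fractions are made up; I track log-wealth directly to avoid floating-point underflow at aggressive fractions):

```python
import math
import random

p = 0.6            # hypothetical win probability at even odds (made up)
kelly = 2 * p - 1  # Kelly fraction for even-odds bets: f* = p - (1 - p) = 0.2

def avg_log_growth(fraction, n_bets=10_000):
    # Betting `fraction` of current wealth multiplies wealth by (1 + fraction)
    # on a win and by (1 - fraction) on a loss, so we can sum logs directly.
    log_w = 0.0
    for _ in range(n_bets):
        if random.random() < p:
            log_w += math.log(1 + fraction)
        else:
            log_w += math.log(1 - fraction)
    return log_w / n_bets

# The Kelly fraction should show the highest long-run growth rate with
# probability near 1; both under-betting and over-betting do worse.
for f in [0.05, 0.1, kelly, 0.4, 0.8]:
    print(f, avg_log_growth(f))
```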
So, in the first case, we concluded that you should maximize a linear function of money, and in the second case, we concluded that you should maximize a logarithmic function of money, but in both cases, we assumed nothing about your preferences besides “more money is better”, and the function you’re told to maximize isn’t necessarily your utility function as in the VNM representation theorem. The shape of the function you’re told you should maximize comes from the assumptions behind the iteration, not from your actual preferences.
Yeah, that’s a good argument that if your utility is monotonically increasing in some good X (e.g. wealth), then the type of iterated decision you expect to face involving lotteries over that good can determine that the best way to maximize your utility is to maximize a particular function (e.g. linear) of that good.
But this is not what the ‘iterated decisions’ argument for EUM amounts to. In a sense, it’s quite a bit less interesting. The ‘iterated decisions’ argument does not start with some weak assumption on your utility function and then attempt to impose more structure on it in iterated choice situations. It doesn’t assume anything about your utility function, other than that you have one (or can be represented as having one).
All it’s saying is that, if you expect to face arbitrarily many i.i.d. iterations of a choice among lotteries (i.e. known probability distributions) over outcomes that you have already assigned utilities to, you should pick the lottery with the highest expected utility. Note that the utility assignments do not have to be linear or monotonically increasing in any particular feature of the outcomes (such as the amount of money you gain if that outcome obtains), and that the utility function is basically assumed to be there.
Oh, are you talking about the kind of argument that starts from the assumption that your goal is to maximize a sum over time-steps of some function of what you get at that time-step? (This is, in fact, a strong assumption about the nature of the preferences involved, which representation theorems like VNM don’t make.)
The assumption is that you want to maximize your actual utility. Then, if you expect to face arbitrarily many i.i.d. iterations of a choice among lotteries over outcomes with certain utilities, picking the lottery with the highest expected utility each time gives you the highest actual utility.
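A minimal sketch of that claim, assuming “actual utility” means the sum of per-iteration utilities (the outcomes, utilities, and probabilities below are made-up illustrations):

```python
import random

# Utilities are simply given over abstract outcomes; nothing monetary or
# monotone about them (all numbers are made up):
utility = {"apple": 3.0, "banana": 1.0, "crash": -5.0}

lotteries = {
    "L1": [("apple", 0.5), ("crash", 0.5)],  # EU = 0.5*3 + 0.5*(-5) = -1.0
    "L2": [("banana", 1.0)],                 # EU = 1.0
}

N = 100_000  # i.i.d. iterations of the same choice
for name, lottery in lotteries.items():
    outcomes, probs = zip(*lottery)
    draws = random.choices(outcomes, weights=probs, k=N)
    # Average realized utility converges to the lottery's expected utility,
    # so always picking the highest-EU lottery maximizes the summed utility
    # with probability near 1, by the law of large numbers.
    print(name, sum(utility[o] for o in draws) / N)
```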
It’s really not that interesting of an argument, nor is it very compelling as a general argument for EUM. In practice, you will almost never face the exact same decision problem, with the same options, same outcomes, same probability, and same utilities, over and over again.
Ah, I think that is what I was talking about. By “actual utility”, you mean the sum of the utilities of the outcomes of each decision problem you face, right? What I was getting at is that your utility function splitting as a sum like this is an assumption about your preferences, not just about the relationship between the various decision problems you face.
Yeah, by “actual utility” I mean the sum of the utilities you get from the outcomes of each decision problem you face. You’re right that if my utility function were defined over lifetime trajectories, then this would amount to quite a substantive assumption, e.g. that the utility of each iteration contributes equally to the overall utility, and so on.
And I think I get what you mean now, and I agree that for the iterated decisions argument to be internally motivating for an agent, it does require stronger assumptions than the representation theorem arguments. In the standard ‘iterated decisions’ argument, my utility function is defined over outcomes, which are the prizes in the lotteries I choose from in each iterated decision. It simply underspecifies what my preferences over trajectories of decision problems might look like (or whether I even have any). In this sense, the ‘iterated decisions’ argument is less self-contained than (i.e., requires stronger assumptions than) the ‘representation theorem’ arguments: representation theorems justify EUM entirely in reference to the agent’s existing attitudes, whereas the ‘iterated decisions’ argument relies on external considerations that are not fixed by those attitudes.
Does this get at the point you were making?
Yes, I think we’re on the same page now.