The assumption is that you want to maximize your actual utility. Then, if you expect to face arbitrarily many i.i.d. iterations of a choice among lotteries over outcomes with certain utilities, picking the lottery with the highest expected utility each time gives you the highest actual utility.
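(As an aside, the long-run claim can be sketched numerically. This is a minimal illustration with made-up lotteries and utilities, assuming i.i.d. repetitions of the same choice.)

```python
import random

# Two hypothetical lotteries over utility outcomes (illustrative numbers only):
# lottery A: utility 10 with probability 0.5, else 0  -> expected utility 5.0
# lottery B: utility 4 for certain                    -> expected utility 4.0
def lottery_a():
    return 10 if random.random() < 0.5 else 0

def lottery_b():
    return 4

random.seed(0)
n = 100_000  # number of i.i.d. iterations of the same decision problem

total_a = sum(lottery_a() for _ in range(n))  # always pick the higher-EU lottery
total_b = sum(lottery_b() for _ in range(n))  # always pick the lower-EU lottery

# By the law of large numbers, total_a / n converges to 5.0 > 4.0 = total_b / n,
# so maximizing expected utility each round maximizes total ("actual") utility.
print(total_a / n, total_b / n)
```

The per-round average of the max-EU policy converges to its expected utility, which is why the argument needs arbitrarily many i.i.d. iterations to go through.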
It’s really not that interesting an argument, nor is it very compelling as a general argument for EUM. In practice, you will almost never face the exact same decision problem, with the same options, same outcomes, same probabilities, and same utilities, over and over again.
Ah, I think that is what I was talking about. By “actual utility”, you mean the sum of the utilities of the outcomes of each decision problem you face, right? What I was getting at is that your utility function decomposing as a sum like this is an assumption about your preferences, not just about the relationship between the various decision problems you face.
Yeah, by “actual utility” I mean the sum of the utilities you get from the outcomes of each decision problem you face. You’re right that if my utility function were defined over lifetime trajectories, then this would amount to quite a substantive assumption, e.g. that the utility of each iteration contributes equally to the overall utility, and so on.
And I think I get what you mean now, and I agree that for the iterated decisions argument to be internally motivating for an agent, it requires stronger assumptions than the representation theorem arguments. In the standard ‘iterated decisions’ argument, my utility function is defined over outcomes, which are the prizes in the lotteries I choose from in each iterated decision. The argument simply underspecifies what my preferences over trajectories of decision problems might look like (or whether I even have such preferences). In this sense, the ‘iterated decisions’ argument is not as self-contained as ‘representation theorem’ arguments: representation theorems justify EUM entirely in reference to the agent’s existing attitudes, whereas the ‘iterated decisions’ argument relies on external considerations that are not fixed by the agent’s attitudes.
Does this get at the point you were making?
Yes, I think we’re on the same page now.