That depends on what you mean by “suitably coherent.” If you mean they need to satisfy the vNM independence axiom, then yes. But the point is that I don’t see any good argument why updateless agents should satisfy that axiom. The argument for that axiom passes through wanting to have a certain relationship with Bayesian updating.
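For concreteness, here is the standard statement of the independence axiom I have in mind (my paraphrase of the usual textbook form, not necessarily the exact version you mean):

```latex
% vNM independence: mixing both sides with a common lottery C
% must not change the direction of the preference.
% For all lotteries A, B, C and all p in (0, 1]:
\[
A \succeq B \iff p\,A + (1-p)\,C \;\succeq\; p\,B + (1-p)\,C .
\]
```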
Also, if by “have a utility function” you mean something other than “try to maximize expected utility,” I don’t know what you mean. To me, the cardinal (as opposed to ordinal) structure of preferences that makes me want to call something a “utility function” is about how to choose between lotteries.
Yeah by “having a utility function” I just mean “being representable as trying to maximise expected utility”.
Ah okay, interesting. Do you think that updateless agents need not accept any separability axiom at all? And if they needn’t, what justifies using the EU framework for discussing UDT agents?
In many discussions on LW about UDT, it seems that a starting point is that the agent is maximising some notion of expected utility, and the updatelessness comes in via the EU formula iterating over policies rather than actions. But if we give up on separability, it seems that this EU starting point is not warranted, since every major EU representation theorem needs some version of separability.
You could take as an input parameter to UDT a preference ordering over lotteries that does not satisfy the independence axiom, but is a total order (or total preorder if you want ties). Each policy you can take results in a lottery over outcomes, and you take the policy that gives your favorite lottery. There is no need for the assumption that your preference ordering over lotteries is vNM.
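A minimal sketch of the shape I mean, with made-up names (this is illustrative only, not a concrete proposal):

```python
# Sketch: UDT parameterized by an arbitrary total preorder over lotteries
# rather than by a utility function. All names and interfaces here are
# hypothetical (e.g. `w.outcome_of(policy)` is just an assumed accessor).

def induced_lottery(policy, prior, worlds):
    """Map a policy to the lottery over outcomes it induces under the fixed prior."""
    # Each world fixes what the agent observes; the policy then fixes the
    # outcome in that world. The prior supplies the probabilities.
    return [(prior[w], w.outcome_of(policy)) for w in worlds]

def udt_choose(policies, prior, worlds, prefers):
    """Pick the policy whose induced lottery is most preferred.

    `prefers(a, b)` can be any total preorder over lotteries; it need not
    satisfy independence or be representable as expected utility.
    """
    best = policies[0]
    best_lottery = induced_lottery(best, prior, worlds)
    for policy in policies[1:]:
        lottery = induced_lottery(policy, prior, worlds)
        if prefers(lottery, best_lottery):
            best, best_lottery = policy, lottery
    return best
```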
Note that I don’t think we really understand decision theory or have a coherent proposal. The only thing I feel like I can say confidently is that if you are convinced by the style of argument that is used to argue for the independence axiom, then you should probably also be convinced by arguments that cause you to be updateful and thus not reflectively stable.
Okay this is very clarifying, thanks!
If the preference ordering over lotteries violates independence, then it will not be representable as maximising EU with respect to the probabilities in the lotteries (by the vNM theorem). Do you think it’s a mistake then to think of UDT as “EU maximisation, where the thing you’re choosing is policies”? If so, I believe this is the most common way UDT is framed in LW discussions, and so this would be a pretty important point for you to make more visibly (unless you’ve already made this point before in a post, in which case I’d love to read it).
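(As a toy illustration of the kind of ordering at stake, not something anyone here has proposed: a maximin agent that ranks lotteries by their worst outcome with positive probability gives a total preorder over lotteries, but it violates independence and so has no EU representation:)

```latex
% Toy example: maximin over lotteries violates independence.
% Let A = \$1 for sure, B = \$2 for sure, C = \$0 for sure.
\[
B \succ A \quad\text{(since } 2 > 1\text{)}, \qquad\text{but}\qquad
\tfrac{1}{2}B + \tfrac{1}{2}C \;\sim\; \tfrac{1}{2}A + \tfrac{1}{2}C
\quad\text{(both have worst case } \$0\text{)},
\]
% whereas independence would require the strict preference to survive the mixing.
```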
I think UDT is as you say. I think it is also important to clarify that you are not updating on your observations when you decide on a policy. (If you did update, you wouldn’t really be choosing a function from observations to actions; this is implicit in choosing over policies, but it is important to emphasize in UDT.)
Note that I am using “updateless” differently than “UDT”. By updateless, I mostly mean anything that is not performing Bayesian updates and forgetting the other possible worlds when it makes observations. UDT is more of a specific proposal. “Updateless” is more of a negative property, defined by the lack of updating.
I have been trying to write a big post on utility, but haven’t yet, and decided it would be good to give a quick argument here because of the question. The only posts I remember making against utility are in the geometric rationality sequence, especially this post.
Thanks, the clarification of UDT vs. “updateless” is helpful.
But now I’m a bit confused as to why you would still regard UDT as “EU maximisation, where the thing you’re choosing is policies”. If I have a preference ordering over lotteries that violates independence, the vNM theorem implies that I cannot be represented as maximising EU.
In fact, after reading Vladimir_Nesov’s comment, it doesn’t even seem fully accurate to view UDT as taking in a preference ordering over lotteries. Here’s the way I’m thinking of UDT: your prior over possible worlds uniquely determines the probabilities of a single lottery L, and selecting a global policy is equivalent to choosing the outcomes of this lottery L. Now different UDT agents may prefer different lotteries, but this is in no sense expected utility maximisation. This is simply: some UDT agents think one lottery is best, others might think another is best. There is nothing in this story that resembles a cardinal utility function over outcomes which the agents multiply by their prior probabilities in order to maximise EU.
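(Spelling out my framing with some ad hoc notation of my own:)

```latex
% The prior \mu over worlds fixes the probabilities; a policy \pi only
% chooses which outcome each world gets.
\[
L_{\pi} \;=\; \bigl\{\, \bigl(\mu(w),\; \mathrm{outcome}(w, \pi)\bigr) \;:\; w \in W \,\bigr\},
\]
% so every candidate lottery L_\pi has the same probabilities \mu(w), and
% choosing a policy just means choosing a favourite among the L_\pi,
% with no expected-utility structure assumed anywhere.
```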
It seems that to get an EU representation of UDT, you need to impose coherence on the preference ordering over lotteries (i.e. over different probability distributions over outcomes), but since UDT agents come with some fixed prior over worlds which is not updated, it’s not at all clear why rationality would demand coherence in your preferences between lotteries (let alone coherence that satisfies independence).
Yeah, I don’t have a specific UDT proposal in mind. Maybe instead of “updateless” I should say “the kind of mind that might get counterfactually mugged” as in this example.
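(For readers unfamiliar with the setup: assuming the usual counterfactual-mugging payoffs, a fair coin, a $100 payment demanded on tails, and a $10,000 reward on heads given only if you would have paid on tails, evaluating whole policies from the prior gives:)

```latex
% Usual counterfactual-mugging payoffs (assumed here): fair coin,
% pay \$100 on tails, receive \$10{,}000 on heads iff you would pay.
\[
\mathbb{E}[\text{always pay}] = \tfrac{1}{2}(10{,}000) + \tfrac{1}{2}(-100) = 4950
\;>\; 0 = \mathbb{E}[\text{never pay}],
\]
% so a mind that evaluates policies from the prior pays, while a mind that
% updates on seeing tails refuses, since paying at that point only loses \$100.
```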