Money-pump arguments don’t give us much reason to expect that advanced artificial agents will be representable as expected-utility-maximizers.
The space of agents is large; EU maximizers may be a simple, natural subset of all possible agents.
Given any EU maximizer, you can construct a new, more complicated agent that has a preferential gap over some trivial pair of outcomes. This new agent violates completeness, and so (by VNM) it is not representable as an EU maximizer.
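To make that concrete, here is a minimal sketch of one way to do it (only one way, and the outcomes A, B and perturbation ε are illustrative placeholders, not anything from the original argument): take the maximizer's utility function, add a slightly perturbed copy, and require unanimity between the two.

```latex
% Sketch: manufacturing a preferential gap from an EU maximizer with utility u.
% Let v agree with u except on two trivial outcomes A and B:
\[
  v(A) = u(A) + \varepsilon, \qquad v(B) = u(B) - \varepsilon, \qquad v(x) = u(x) \text{ otherwise},
\]
% with \varepsilon large enough that u ranks A below B while v ranks A above B.
% Define the new agent's preferences over lotteries by unanimity between u and v:
\[
  X \succsim' Y \iff \mathbb{E}_X[u] \ge \mathbb{E}_Y[u] \ \text{and} \ \mathbb{E}_X[v] \ge \mathbb{E}_Y[v].
\]
% A and B are now incomparable, and a small enough sweetening of A leaves them
% incomparable: a preferential gap. \succsim' violates completeness, so by VNM it has
% no expected-utility representation, even though it differs from the original agent
% only on a trivial pair of outcomes.
```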
Similarly, given an agent with incomplete preferences that satisfies the other axioms, you can (always? trivially?) construct an agent with complete preferences by specifying a new preference relation that is sensitive to all sweetenings and sourings.
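Concretely (a sketch, assuming the incomplete preferences admit an expected multi-utility representation, which is the standard result when the remaining axioms hold): completing the preferences amounts to picking a single utility function out of the representing set, which is also the point of EJT's comment linked below.

```latex
% Sketch: completing incomplete preferences that satisfy the other VNM axioms.
% Assume an expected multi-utility representation: a set \mathcal{U} with
\[
  X \succsim Y \iff \mathbb{E}_X[u] \ge \mathbb{E}_Y[u] \ \text{for all} \ u \in \mathcal{U}.
\]
% Pick any single u_0 \in \mathcal{U} and define the completed relation
\[
  X \succsim' Y \iff \mathbb{E}_X[u_0] \ge \mathbb{E}_Y[u_0].
\]
% \succsim' is complete, never reverses a ranking that \succsim already makes, and
% responds to every sweetening or souring that moves \mathbb{E}[u_0], so the completed
% agent is just an EU maximizer with utility function u_0.
```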
So, while it is indeed not accurate to say that sufficiently advanced artificial agents will be EU maximizers, it certainly seems like they can be.
I think gesturing vaguely at VNM and using money-pump arguments are useful for building an (imprecise, possibly wrong) intuition for why EU maximizers might be a simple, natural subset of all agents.
That is, if you try to construct / find / evolve the most powerful agent that you can, without a very precise understanding of agents / cognition / alignment, you’ll probably get something very close to an EU maximizer.
I agree that the authors should be more careful with their words when they cite VNM, but I think the intuition that they build based on these theorems is correct.
See also EJT’s comment here (and the rest of the thread): to complete the preferences, you’d just pick any one of the utility functions from the representation. You can also probably drop continuity for something weaker, as I point out in my reply there.