Plus some other assumptions (capable of backwards induction, knowing trades in advance), right?
Yep, that’s right!
I’m curious whether these assumptions are actually stronger than, or related to, completeness.
Since the Completeness assumption is about preferences while the backward-induction and knowing-trades-in-advance assumptions are not, they don’t seem very closely related to me. The assumption that the agent’s strict preferences are transitive is more closely related, but it’s not stronger than Completeness in the sense of implying Completeness.
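To make that last point concrete, here’s a toy example (mine, not from the post) of strict preferences that are transitive but incomplete:

```latex
% Toy example: over outcomes {A, B, C}, let the only strict preference be
\succ \;=\; \{(A, B)\}
% Transitivity (x \succ y and y \succ z imply x \succ z) holds vacuously:
% there is no two-step chain of strict preferences to extend.
% Completeness (for all x, y: x \succsim y or y \succsim x) fails, since
% neither A \succsim C nor C \succsim A holds (a preferential gap).
```

So Transitivity can hold while Completeness fails, which is why the former doesn’t imply the latter.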
Can you say a bit more about what you mean by ‘interesting agents’?
From your other comment:
That is, if you try to construct / find / evolve the most powerful agent that you can, without a very precise understanding of agents / cognition / alignment, you’ll probably get something very close to an EU maximizer.
I think this could well be right. The main thought I want to argue against is more like:
Even if you initially succeed in creating a powerful agent that doesn’t maximize expected utility, VNM/CCT/money-pump arguments make it likely that this powerful agent will later become an expected utility maximizer.
I meant stronger in a loose sense: you argued that “completeness doesn’t come for free”, but it seems like what you’ve actually shown is that not-pursuing-dominated-strategies is the thing that doesn’t come for free.
You either need a bunch of assumptions about preferences, or you need one less of those assumptions, plus a few other assumptions about knowing trades, induction, and adherence to a specific policy.
And even given all these other assumptions, the proposed agent with a preferential gap seems like it’s still only epsilon-different from an actual EU maximizer. To me this looks like a strong hint that these assumptions do point at the core of something simple, which one might call “coherence”, and which I expect to show up in (all minus epsilon) advanced agents, even if there are pathological points in advanced-agent space that lack these properties (and even if expected utility theory as a whole isn’t quite correct).
You either need a bunch of assumptions about preferences, or you need one less of those assumptions, plus a few other assumptions about knowing trades, induction, and adherence to a specific policy.
I see. I think this is right.
the proposed agent with a preferential gap seems like it’s still only epsilon-different from an actual EU maximizer.
I agree with this too, but note that the agent with a single preferential gap is just an example. Agents can have arbitrarily many preferential gaps and still avoid pursuing dominated strategies, and agents with many preferential gaps may behave quite differently from expected utility maximizers.
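To illustrate how that can work, here’s a minimal sketch (my own code, not anything from the post; the outcome names, trade sequence, and exact policy wording are all my assumptions). It implements a policy along the lines of ‘never accept an option you strictly disprefer to something you previously turned down’, and checks that an agent with a preferential gap escapes a single-sweetening money pump:

```python
# Sketch only: outcome names, the trade sequence, and the policy wording
# below are illustrative assumptions, not the post's exact formulation.

# Strict preferences: only 'A+' > 'A'. 'B' is incomparable to both,
# so the agent has preferential gaps between B and each of A, A+.
STRICT_PREFS = {("A+", "A")}

def strictly_prefers(x, y):
    return (x, y) in STRICT_PREFS

def run_trades(start, offers):
    """Walk through a sequence of offered swaps, declining any offer
    strictly dispreferred to the current holding or to anything the
    agent previously turned down."""
    holding = start
    turned_down = []
    for offer in offers:
        blocked = strictly_prefers(holding, offer) or any(
            strictly_prefers(past, offer) for past in turned_down
        )
        if blocked:
            continue  # decline the trade
        # The trade is permissible; suppose the agent accepts it.
        turned_down.append(holding)
        holding = offer
    return holding

# Single-sweetening money pump: start with A+, offer B, then offer A.
# Without the policy, the agent could drift from A+ to B to A, ending
# strictly worse off. With the policy, the final trade is blocked
# because A is strictly dispreferred to the previously turned-down A+.
print(run_trades("A+", ["B", "A"]))  # -> "B", never "A"
```

The sketch also makes the role of the extra assumptions visible: the final trade is blocked not by the agent’s current holding but by its memory of an option it previously turned down, which is exactly what the adherence-to-a-specific-policy assumption buys.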