I meant stronger in a loose sense: you argued that “completeness doesn’t come for free”, but what you’ve actually shown seems to be that not-pursuing-dominated-strategies is the thing that doesn’t come for free.
You either need a bunch of assumptions about preferences, or you need one fewer of those assumptions, plus a few other assumptions: about knowing what trades will be offered, about induction, and about adherence to a specific policy.
And even given all these other assumptions, the proposed agent with a preferential gap still seems only epsilon-different from an actual EU maximizer. To me this looks like a strong hint that these assumptions really do point at the core of something simple, which one might call “coherence”, and which I expect to show up in (all minus epsilon of) advanced agents, even if there are pathological points in advanced-agent-space which lack these properties (and even if expected utility theory as a whole isn’t quite correct).
You either need a bunch of assumptions about preferences, or you need one fewer of those assumptions, plus a few other assumptions: about knowing what trades will be offered, about induction, and about adherence to a specific policy.
I see. I think this is right.
the proposed agent with a preferential gap still seems only epsilon-different from an actual EU maximizer.
I agree with this too, but note that the agent with a single preferential gap is just an example. Agents can have arbitrarily many preferential gaps and still avoid pursuing dominated strategies. And agents with many preferential gaps may behave quite differently from expected utility maximizers.
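To make that last claim concrete, here is a minimal Python sketch (all names are hypothetical, not from the original discussion) of an agent with a preferential gap following a policy along the lines mentioned above: never accept an option strictly dispreferred to one you previously turned down. Under that assumed policy, the agent trades freely across the gap but cannot be money-pumped into a dominated outcome:

```python
# A minimal sketch. Assumptions (not from the original discussion):
# outcomes are strings; strict preferences are a set of ordered pairs;
# any pair related in neither direction is a preferential gap.

STRICT = {("A", "A-"), ("B+", "B")}  # "A-" is a soured A, "B+" a sweetened B.
# A is incomparable to both B and B+: a preferential gap, since the gap
# is insensitive to the sweetening from B to B+.

def prefers(x, y):
    """True iff x is strictly preferred to y."""
    return (x, y) in STRICT

def make_agent():
    turned_down = []  # options the agent has traded away

    def accept_trade(current, offered):
        # Refuse anything strictly worse than what the agent holds now.
        if prefers(current, offered):
            return False
        # Assumed policy: also refuse anything strictly worse than an
        # option previously turned down, so no sequence of trades ends
        # in an outcome dominated by one the agent could have kept.
        if any(prefers(past, offered) for past in turned_down):
            return False
        turned_down.append(current)
        return True

    return accept_trade

agent = make_agent()
print(agent("A", "B"))   # True: A and B sit across a preferential gap
print(agent("B", "A-"))  # False: the agent turned down A, and A- is worse than A
```

With many gaps, the same bookkeeping applies pairwise: the agent can decline whole families of trades that an EU maximizer would rank, while still never ending up strictly worse off than where it started.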