Ah, sorry, now I understand what’s going on. You are saying “there’s an obvious generalization, but then you’d have to pick a ‘fair’ strategy profile that it would privilege.” I’m saying “there’s no obvious generalization which preserves what’s interesting about the two-strategy case.” So we’re in agreement already.
(I’m not entirely without hope; I have a vague idea that we could order the possible strategies somehow, and if we can prove a higher utility for strategy X than for any strategy below X in the ordering, then the agent can prove it will definitely choose X or a strategy above it in the ordering. Or something like that. But I'd need to look at the details much more closely.)
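To make that vague idea slightly more concrete, here's a toy sketch in Python (the names, and the `try_prove_dominates(x, y)` placeholder standing in for "the agent can prove U(x) > U(y)", are just mine for illustration; this isn't a worked-out construction):

```python
def choose(strategies_in_increasing_order, try_prove_dominates):
    """Toy agent for the ordering idea: scan from the top of the ordering
    and return the first strategy X for which we can prove X beats every
    strategy strictly below it.  If such an X exists, the agent's choice
    is guaranteed to be X or something above X in the ordering."""
    for i in range(len(strategies_in_increasing_order) - 1, 0, -1):
        x = strategies_in_increasing_order[i]
        lower = strategies_in_increasing_order[:i]
        if all(try_prove_dominates(x, y) for y in lower):
            return x
    # No dominance proof found: fall back to the bottom strategy.
    return strategies_in_increasing_order[0]
```

Whether anything like this preserves what's interesting about the two-strategy case is exactly the part I'd still need to check.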