Why does wanting to maintain indifference to shifting probability mass between (some) trajectories imply that we care about ex-ante permissibility?
The ex-ante permissible trajectories are the trajectories between which the agent lacks any strict preference. Suppose the permissible trajectories are {A,B,C}. Then, from the agent’s perspective, A isn’t better than B, B isn’t better than A, and so on. The agent considers them all equally choiceworthy. So the agent doesn’t mind picking any one of them over any other, and by the same token it doesn’t mind switching from one lottery over them to another lottery with a different distribution. The agent doesn’t care whether it gets A, or B, or an even chance of A or B, or a one-third chance of each of A, B, and C.[1]
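In symbols (my notation, and simplified in the way footnote 1 flags): let $T=\{A,B,C\}$ be the permissible set, $\succ$ the agent’s strict preference relation, and $\Delta(T)$ the set of lotteries over $T$. Indifference to shifting probability mass between the permissible trajectories then amounts to

$$\forall P, Q \in \Delta(T): \quad P \not\succ Q \ \text{ and } \ Q \not\succ P,$$

with the degenerate lotteries covering the pairwise comparisons between $A$, $B$, and $C$ themselves.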
Suppose we didn’t have multiple permissible options ex-ante. For example, if only A was permissible, then the agent would dislike shifting probability mass away from A and towards B or C—because B and C aren’t among the best options.[2] So that’s why we want multiple ex-ante permissible trajectories: it’s the only way to maintain indifference to shifting probability mass between (those) trajectories.
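Under the expected-utility simplification from footnote 1 (again, my notation), this is a one-line calculation: if only $A$ is permissible, so that $u(A) > u(B)$, then a lottery $L_\varepsilon$ that moves probability mass $\varepsilon > 0$ from $A$ to $B$ gives

$$\mathbb{E}_{L_\varepsilon}[u] = (1-\varepsilon)\,u(A) + \varepsilon\,u(B) = u(A) - \varepsilon\,(u(A) - u(B)) < u(A),$$

so the agent strictly prefers the sure option $A$ to any such shift.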
[I’ll respond to the stuff in your second paragraph under your longer comment.]
- ^
The analogous case with complete preferences is clearer: if there are multiple permissible options, the agent must be indifferent between them all (or else the agent would be fine picking a strictly dominated option). So if options $o_1, \dots, o_n$ are permissible, then $u(o_1) = \dots = u(o_n)$. Assuming expected utility theory, we’ll then of course have $\mathbb{E}_P[u] = \mathbb{E}_Q[u]$ for any probability functions $P, Q$ over those options. This means the agent is indifferent to shifting probability mass between the permissible options.
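Spelling out the ‘of course’ step: if $u(o_1) = \dots = u(o_n) = c$, then for any probability function $P$ over the permissible options,

$$\mathbb{E}_P[u] = \sum_{i=1}^{n} P(o_i)\,u(o_i) = c \sum_{i=1}^{n} P(o_i) = c,$$

which doesn’t depend on $P$ at all.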
- ^
This is a bit simplified but it should get the point across.
For the purposes of this discussion, this is right. I don’t think the differences between this description and the actual proposal matter in this case.
I don’t think this representation is quite right, although not for a reason I expect to matter for this discussion. It’s a technicality, but I’ll mention it for completeness. If we’re using Bradley’s representation theorem from section 2.1, the set of subagents must include every coherent completion of the agent’s preferences. E.g., suppose there are three possible trajectories. Let $p$ denote a pressed trajectory and $u_1, u_2$ two unpressed trajectories, where $u_1$ gets you strictly more coins than $u_2$. Then there’ll be five (ordinal) subagents, each described in order of preference (with trajectories written adjacently being ranked equally): $\langle u_1, u_2, p \rangle$, $\langle u_1, u_2 p \rangle$, $\langle u_1, p, u_2 \rangle$, $\langle u_1 p, u_2 \rangle$, and $\langle p, u_1, u_2 \rangle$.
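As a quick sanity check on that count (an illustrative Python sketch of my own, not anything from Bradley’s paper): enumerate the weak orders on $\{u_1, u_2, p\}$ that keep $u_1$ strictly above $u_2$.

```python
from itertools import product

# Items: two unpressed trajectories and one pressed trajectory.
items = ["u1", "u2", "p"]

def canonical(levels):
    """Compress rank levels so assignments inducing the same weak order compare equal."""
    distinct = sorted(set(levels))
    return tuple(distinct.index(l) for l in levels)

completions = set()
for levels in product(range(3), repeat=3):  # rank 0 = most preferred
    ranks = dict(zip(items, levels))
    if ranks["u1"] < ranks["u2"]:  # every coherent completion keeps u1 strictly above u2
        completions.add(canonical(levels))

for comp in sorted(completions):
    tiers = {}
    for item, rank in zip(items, comp):
        tiers.setdefault(rank, []).append(item)
    print(" > ".join("~".join(tiers[r]) for r in sorted(tiers)))

# Prints exactly five weak orders, matching the five ordinal subagents above.
```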
Indeed, this wouldn’t be good, and it isn’t what Thornley’s proposal does. The agent doesn’t choose arbitrarily between the best pressed and unpressed options; Thornley’s proposal places further requirements on the agent to ensure this. My use of ‘arbitrary’ in the post is a bit misleading in that context: I’m only using it to identify when the agent has multiple permissible options available, which is what we’re after to get TND. If no other requirements were added to the agent, and it were acting under certainty, this could well lead it to actually choose arbitrarily. But it doesn’t have to in general, and under uncertainty, together with the rest of Thornley’s requirements, it doesn’t. (The requirements are described in his proposal.)
I’ll first flag that the results don’t rely on subagents. Creating a group agent out of multiple subagents is possibly an interesting way to create an agent representable as having incomplete preferences, but this isn’t the same as creating a single agent whose single preference relation happens not to satisfy completeness.
That said, I will spend some more time thinking about the subagent idea, and collusion between the subagents does seem like the major initial hurdle for this approach to creating an agent with preferential gaps.