Some nits we know about but didn’t include in the problems section:
P[mushroom->anchovy] = 0. The current argument does not handle the case where a subagent assigns probability 0 to one of the possible states. In that case, it would not be possible to complete the preferences exactly as written.
Indifference. If anchovy were placed directly above mushroom in the preference graph above (so that John is truly indifferent between them), then that might require some special handling. But it might also just work once the “Value vs Utility” issue is worked out. If the subagents are not myopic, i.e. they handle instrumental values, then whether anchovy is less, equally, or more desirable than mushroom doesn’t matter much on its own; what matters is which opportunities are available afterward from the anchovy state relative to the mushroom state.
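A toy sketch of the instrumental-value point (all states, rewards, and the discount factor here are invented for illustration, not taken from the post): two states can be exactly tied in immediate desirability yet differ in value once we account for the opportunities reachable from each.

```python
# Deterministic transitions: state -> successor states reachable afterward.
transitions = {
    "mushroom": ["dead_end"],          # few opportunities afterward
    "anchovy": ["dessert", "coffee"],  # more opportunities afterward
    "dead_end": [],
    "dessert": [],
    "coffee": [],
}

# Immediate rewards; mushroom and anchovy are exactly tied on their own.
reward = {"mushroom": 1.0, "anchovy": 1.0,
          "dead_end": 0.0, "dessert": 2.0, "coffee": 1.5}

GAMMA = 0.9  # discount factor (arbitrary choice for the toy example)

def value(state):
    """Value = immediate reward + discounted value of the best successor."""
    succ = transitions[state]
    if not succ:
        return reward[state]
    return reward[state] + GAMMA * max(value(s) for s in succ)

print(value("mushroom"))  # 1.0: tied immediate reward, no opportunities after
print(value("anchovy"))   # 2.8: same immediate reward, better opportunities
```

So for non-myopic subagents, indifference between the two immediate states is not the same as indifference between choosing them.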
Also, I think I buy the following part but I really wish it were more constructive.
Now, we haven’t established which distribution of preferences the system will end up sampling from. But so long as it ends up at some non-dominated choice, it must end up with non-strongly-incomplete preferences with probability 1 (otherwise it could modify the contract for a strict improvement in the cases where it ends up with strongly-incomplete preferences). And so long as the space of possibilities is compact and arbitrary contracts are allowed, all that’s left is a bargaining problem. The only way the system would end up with a dominated preference-distribution is via some kind of bargaining breakdown.
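To make "non-dominated" concrete, here is a minimal sketch (subagent utilities are invented for illustration): treat a preference-distribution as a lottery over outcomes, and call it dominated if some other lottery gives every subagent at least as much expected utility and some subagent strictly more.

```python
# Invented utilities for two subagents over three outcomes.
utilities = {
    "subagent_1": {"pepperoni": 2.0, "mushroom": 1.0, "burnt": 0.0},
    "subagent_2": {"pepperoni": 0.0, "mushroom": 1.0, "burnt": 0.0},
}

def expected_utilities(lottery):
    """Each subagent's expected utility under a lottery over outcomes."""
    return tuple(
        sum(p * u[o] for o, p in lottery.items())
        for u in utilities.values()
    )

def dominates(l1, l2):
    """l1 Pareto-dominates l2: weakly better for both, strictly for one."""
    u1, u2 = expected_utilities(l1), expected_utilities(l2)
    return all(a >= b for a, b in zip(u1, u2)) and any(a > b for a, b in zip(u1, u2))

# Putting probability on an outcome neither subagent wants is dominated:
bad    = {"pepperoni": 0.5, "burnt": 0.5}     # expected utilities (1.0, 0.0)
better = {"pepperoni": 0.5, "mushroom": 0.5}  # expected utilities (1.5, 0.5)
print(dominates(better, bad))  # True: a strict improvement exists
```

In this toy setup, ending up at `bad` would mean the subagents left a contract modification on the table that both would accept, which is the bargaining-breakdown case described above.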