Consider the following degenerate case: there is only one decision to be made, and your competing theories assess it as follows.
Theory 1: option A is vastly worse than option B.
Theory 2: option A is just a tiny bit better than option B.
And suppose you find theory 2 just slightly more probable than theory 1.
Then it seems like any parliamentary model is going to say that theory 2 wins, and you choose option A. That seems like a bad outcome.
Accordingly, I suggest that to arrive at a workable parliamentary model we need to do at least one of the following:
Disallow degenerate cases of this kind. (Seems wrong; e.g., suppose you have an important decision to make on your deathbed.)
Bite the bullet and say that in the situation above you really are going to choose A over B. (Seems pretty terrible.)
Take into account how strongly the delegates feel about the decision, in such a way that you’d choose B in this situation. (Handwavily it feels as if any way of doing this is going to constrain how much “tactical” voting the delegates can engage in.)
As you might gather, I find the last option the most promising.
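To make the worry concrete, here is a minimal sketch with made-up numbers (the credences and utilities below are illustrative assumptions, not anything from the comments): a rule that simply lets the more probable theory carry the vote picks A, while a rule that also weighs how much each theory cares about the decision, roughly the third option above, picks B.

```python
# Degenerate one-decision case with hypothetical numbers: Theory 1 cares a lot,
# Theory 2 barely cares, and Theory 2 is slightly more probable.
credences = {"theory_1": 0.49, "theory_2": 0.51}

utilities = {
    "theory_1": {"A": 0.0, "B": 100.0},   # option A is vastly worse than B
    "theory_2": {"A": 1.01, "B": 1.00},   # option A is just a tiny bit better than B
}

# Rule 1: the more probable theory simply wins the vote.
top_theory = max(credences, key=credences.get)
majority_choice = max(utilities[top_theory], key=utilities[top_theory].get)

# Rule 2 (roughly "option 3"): weight each theory's cardinal stakes by its credence.
def weighted_score(option):
    return sum(credences[t] * utilities[t][option] for t in credences)

stake_weighted_choice = max(["A", "B"], key=weighted_score)

print(majority_choice)        # A -- Theory 2 wins the vote despite barely caring
print(stake_weighted_choice)  # B -- Theory 1's large stake dominates once stakes count
```

The catch, picked up further down the thread, is that the stake-weighted rule quietly assumes the two theories' utilities can be compared on a common scale.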
Great example. As an alternative to your three options (or maybe this falls under your first bullet), maybe negotiation should happen behind a veil of ignorance about what decisions will actually need to be made; the delegates would arrive at a decision function for all possible decisions.
Your example does make me nervous, though, on behalf of delegates who don’t have much to negotiate with. Maybe (as badger says) cardinal information does need to come into it.
Yes, I think we need something like this veil of ignorance approach.
In a paper (preprint) with Ord and MacAskill we prove that for similar procedures, you end up with cyclical preferences across choice situations if you try to decide after you know the choice situation. The parliamentary model isn’t quite within the scope of the proof, but I think more or less the same proof works. I’ll try to sketch it.
Suppose:
We have equal credence in Theory 1, Theory 2, and Theory 3
Theory 1 prefers A > B > C
Theory 2 prefers B > C > A
Theory 3 prefers C > A > B
Then in a decision between just A and B there is no scope for negotiation, so since two of the three theories prefer A, the parliament will too. Similarly, in a choice between B and C the parliament will prefer B, and in a choice between C and A the parliament will prefer C, so the parliament’s preferences across choice situations are cyclic.
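A tiny script (an illustrative check of the pairwise votes, not the proof from the paper) makes the cycle explicit:

```python
from itertools import combinations

# Each theory's ordinal ranking, best first (equal credence in all three).
rankings = {
    "Theory 1": ["A", "B", "C"],
    "Theory 2": ["B", "C", "A"],
    "Theory 3": ["C", "A", "B"],
}

def pairwise_winner(x, y):
    # With only two options on the table there is nothing to trade, so each
    # theory just votes for whichever of the two it ranks higher.
    votes_for_x = sum(1 for r in rankings.values() if r.index(x) < r.index(y))
    return x if votes_for_x > len(rankings) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: {pairwise_winner(x, y)} wins")
# A vs B: A wins
# A vs C: C wins
# B vs C: B wins
# -> A beats B, B beats C, C beats A: cyclical preferences across choice situations.
```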
I think So8res’s solution is essentially your option 3, with the strength of the disagreements taken into account in the utility function; once everything you care about is accounted for, the best choice is just the standard one.
I agree that some cardinal information needs to enter in the model to generate compromise. The question is whether we can map all theories onto the same utility scale or whether each agent gets their own scale. If we put everything on the same scale, it looks like we’re doing meta-utilitarianism. If each agent gets their own scale, compromise still makes sense without meta-value judgments.
With only two outcomes the case is too degenerate if agents get their own scales, so suppose A, B, and C are the options, theory 1 has ordinal preferences B > C > A, and theory 2 has preferences A > C > B. Depending on how much of a compromise C is for each agent, the outcome could vary between the following (a numerical check of the last case is sketched after this list):
choosing C (say if C is 99% as good as the ideal for each agent),
a 50⁄50 lottery over A and B (if C is only 1% better than the worst for each), or
some other lottery (for instance, 1 thinks C achieves 90% of B and 2 thinks C achieves 40% of A. Then, a lottery with weight 2/3rds on C and 1/3rd on A gives them each 60% of the gain between their best and worst)
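A quick check of that last case, with each theory’s utilities normalized to its own 0-to-1 scale (worst option = 0, best option = 1), as the own-scales approach assumes:

```python
# Each theory's utilities on its own normalized scale: worst = 0, best = 1.
theory_1 = {"B": 1.0, "C": 0.9, "A": 0.0}  # thinks C achieves 90% of B
theory_2 = {"A": 1.0, "C": 0.4, "B": 0.0}  # thinks C achieves 40% of A

lottery = {"C": 2 / 3, "A": 1 / 3}  # weight 2/3 on C, 1/3 on A

def expected_gain(theory):
    # Expected utility of the lottery on this theory's own 0-to-1 scale.
    return sum(p * theory[option] for option, p in lottery.items())

print(expected_gain(theory_1))  # ≈ 0.6 -- 60% of the way from A (worst) to B (best)
print(expected_gain(theory_2))  # ≈ 0.6 -- 60% of the way from B (worst) to A (best)
```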
A possible (but I admit, quite ugly) workaround: whenever there are very few decisions to be made, introduce dummy bills that would not actually be carried out; the MPs wouldn’t know that the dummies exist, so they would treat every bill as real. In this case Theory 1 might be able to negotiate its way into getting B.
This seems really similar to the problem Knightian uncertainty attempts to fix.