Individual Rationality Needn’t Generalize to Rational Consensus
tl;dr
Organizations that enforce rationality at the collective level can get very different voting outcomes than organizations that enforce rationality at the individual level, per known results in social choice theory. This has implications for real-world expert panels.
Here, “rationality” means logical consistency: a group can vote to reject a conclusion even while majorities also accept each of the premises that jointly entail it, and vice-versa, even though every member evaluated the premises and drew the conclusion consistently on their own. This arises because of how majority votes work.
This post summarizes Philip Pettit’s 2002 paper outlining the issue and its implication for any deliberative democracy. It additionally summarizes Pettit and List’s impossibility result on judgement aggregation rules from the Stanford Encyclopedia of Philosophy.
Introduction to the Discursive Dilemma
The Doctrinal Paradox
Three judges have to decide by majority whether a defendant broke a contract.
Legal doctrine dictates that if the following two premises are true:
(a) the defendant was contractually obliged not to do action X, and
(b) the defendant did action X,
then the conclusion is that the defendant broke his contract. Let’s call this conclusion (c).
The judges get to decide which of the premises, (a) and (b), are true.
The conclusion is thus the simple conjunction of the premises, (c) = (a) ∧ (b). Taking one possible way the judges vote, we can construct a truth table.
|          | (a)   | (b)   | (c) = (a) ∧ (b) |
|----------|-------|-------|-----------------|
| Judge 1  | True  | True  | True            |
| Judge 2  | True  | False | False           |
| Judge 3  | False | True  | False           |
| Majority | True  | True  | False           |
In this permutation of votes, because the majority voted false for (c), the judges vote to acquit the defendant.
But here’s the paradox: if the majority of the judges thought (a) was true and (b) was true, then it should have implied that the majority thought (c) was true.
You now have the unfortunate case where a majority voted true for each of the premises, but the majority also rejected the conclusion.
This demonstrates collective inconsistency arising from individual consistency.
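A minimal sketch in Python of the two aggregation routes, using the vote profile from the table above (the variable names are mine, purely illustrative):

```python
# Doctrinal paradox sketch: the same ballots, aggregated two ways.
votes = [
    {"a": True,  "b": True},   # Judge 1
    {"a": True,  "b": False},  # Judge 2
    {"a": False, "b": True},   # Judge 3
]

def majority(values):
    """True iff a strict majority of the boolean values are True."""
    return sum(values) > len(values) / 2

# Each judge individually and consistently derives the conclusion c = a AND b.
individual_conclusions = [v["a"] and v["b"] for v in votes]

# Route 1: vote directly on the conclusion.
conclusion_based = majority(individual_conclusions)

# Route 2: vote on each premise, then infer the conclusion collectively.
premise_based = majority([v["a"] for v in votes]) and majority([v["b"] for v in votes])

print(conclusion_based)  # False -> acquit
print(premise_based)     # True  -> convict
```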
Generalizations
A similar paradox can be constructed for disjunctive propositions, where the conclusion follows if either premise holds. It also scales to groups of arbitrary size (the search sketch after the list below illustrates this for a five-member panel).
In fact, Pettit identifies the following minimal conditions:
a. there is a conclusion to be decided among a group of people by reference to a conjunction (or disjunction) of independent or separable premises—the conclusion will be endorsed if relevant premises are endorsed, and otherwise it will be rejected;
b. each member of the group forms a judgment on each of the premises and a corresponding judgment on the conclusion;
c. each of the premises is supported by a majority of members but those majorities do not coincide with one another;
d. the intersection of those majorities will support the conclusion, and the others reject it, in view of a; and
e. the intersection of the majorities is only a minority in the group as a whole.
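As a rough illustration that the paradox scales beyond three voters, here is a brute-force search sketch (my own construction, not from the paper), assuming each voter judges two independent premises and derives the conclusion by conjunction:

```python
from itertools import product

def majority(values):
    """True iff a strict majority of the boolean values are True."""
    return sum(values) > len(values) / 2

def paradoxical_profiles(n):
    """Yield profiles of n individually consistent voters (each holding c = a and b)
    where majorities endorse both premises yet a majority rejects the conclusion."""
    individual_options = [(a, b) for a in (True, False) for b in (True, False)]
    for profile in product(individual_options, repeat=n):
        a_votes = [a for a, _ in profile]
        b_votes = [b for _, b in profile]
        c_votes = [a and b for a, b in profile]
        if majority(a_votes) and majority(b_votes) and not majority(c_votes):
            yield profile

# Example: how many such profiles exist for a five-member panel?
print(sum(1 for _ in paradoxical_profiles(5)))
```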
Implications
Although it sounds cute at first, this paradox underscores a very important practical issue for voting procedures in organizations.
First, organizations must choose whether to vote directly on the proposition at stake, or to vote on the individual conditions and infer the decision from those votes. As the foregoing paradox demonstrates, the two procedures can produce very different outcomes: under the first, the judges voted to acquit, but under the second they would have had to convict.
Second, it is not the case that organizations can always choose to vote directly and ignore preserving collective consistency, as is usually argued about electoral voting. Pettit identifies two examples of organizations where collective consistency is absolutely necessary:
- A committee that has been tasked to evaluate the merits of a case and arrive at a recommendation accordingly. This includes awards panels, juries, trusts acting on external instructions, and expert policy bodies.
- Political or activist movements that seek to hold ethically or philosophically consistent positions, where members may desert if the movement does not appear to be holistically consistent.
Because of this generalization, the doctrinal paradox has been dubbed the discursive dilemma, given that the issue at stake needn’t have anything to do with legal doctrine at all.
Finally, it demonstrates that the root of the issue lies in the specific scheme for collecting majority votes. It opens the door to thinking about alternative voting schemes that try to satisfy both individual and collective rationality.
An Impossibility Result For Collective Rationality In the Best Case
The more general problem of arriving at a way to collect votes on propositions is known as judgement aggregation.
Typically, we would like a procedure for judgement aggregation that satisfies a few nice properties:
- Plurality: There are no constraints on which of the available options people can vote for. In other words, whatever the options on offer, any individual may vote for any of them.
- Complete collective consistency: The procedure preserves collective consistency for all propositions. In other words, whatever the propositions, the collective judgement always evaluates them consistently.
- Anonymity: If the proportions of votes stay the same, then the judgement stays the same, independent of who exactly voted. In other words, it doesn’t matter which subpopulation voted, only the relative proportions.
- Systematicity: This is fairly technical, but it is a slightly stronger version of independence of irrelevant alternatives, which states that a preference between two options shouldn’t be changed by introducing a third option. You might prefer A to B, but introducing C shouldn’t suddenly make you prefer B to A, although you may well prefer C to A or A to C.
Unfortunately, List and Pettit jointly proved in 2002 that no aggregation scheme can satisfy all of these properties at once. To preserve collective consistency, we must sacrifice some of the others.
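For instance, the premise-based procedure (vote on the premises, infer the conclusion) restores collective consistency but gives up systematicity: the collective verdict on the conclusion no longer depends only on how individuals judged the conclusion. A small sketch with made-up vote profiles:

```python
def majority(values):
    return sum(values) > len(values) / 2

def premise_based(profile):
    """Premise-based procedure: majority-vote each premise, then infer c = a AND b."""
    return majority([a for a, _ in profile]) and majority([b for _, b in profile])

# In both profiles the judges hold the SAME individual judgments on the conclusion
# (True, False, False), but they disagree on the premises in different ways.
profile_1 = [(True, True), (True, False), (False, True)]    # table from the post
profile_2 = [(True, True), (False, False), (False, False)]  # hypothetical variant

print(premise_based(profile_1))  # True  -> the collective accepts the conclusion
print(premise_based(profile_2))  # False -> the collective rejects it
```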
An Exercise for the Interested Reader
So how would you preserve collective consistency and individual consistency?
Why is the collective decision of the three judges wrong? Two of the judges believe there was no breach of contract, although for different reasons. Therefore the defendant is acquitted. It seems to me clearly wrong to prefer a separate vote on A and B, round the results off to true/false, and then use those fictitious values to infer C.
The two judges for acquittal need not even have been disagreeing about any matter of substance. Judge 2 found that the defendant’s actions did not meet the contractual definition of what was forbidden, while Judge 3 found that the contractual definition was not what the defendant did, a distinction without a difference. (Judge 1 thinks the other two are just splitting hairs and expects to be vindicated when the plaintiff appeals.)
But it doesn’t imply that. There is no such thing as “the” majority. It’s a different majority every time. One might as well say that if someone believes (a) and someone believes (b), then someone must believe (c).
Logical reasoning only preserves truth, not probability, plausibility, desirability, or anything of that sort. So surely property (2), complete collective consistency, is a non-starter.
The paradox demonstrates that there are differences in outcome based on the way you aggregate majorities. It doesn’t claim that one aggregation rule is superior to the other.
That’s another way to say that the collective decision isn’t “wrong”: the point of the paradox is to show that it depends on how you choose to measure that decision.
Naturally. But there are cases where you can’t avoid separate votes on A and B. Pettit provides two cases, which I have reproduced above (see “it is not the case that organizations can always choose to vote directly and ignore preserving collective consistency”).
The obvious case is where you are required to vote on A and B, and infer C from there. This can happen in a procedural context, because that’s just the way someone specified it.
The less-obvious case is where acquiring consensus on C directly is prohibitive or does not reflect the same result as acquiring consensus on A and B. Perhaps C is controversial or people have incentives to lie, but A and B are not. Perhaps A and B were ratified by Congress and now it is up to constitutional scholars to decide on the merits of C without being able to consult Congress as a whole.
Whatever the case, the consequences for decision-making are clear. We cannot rely on inferences of the form “the majority agreed on A” and “the majority agreed on B”, therefore “the majority agreed on C”. Yet, as the foregoing illustrates, such inferences are sometimes made out of necessity.
Nothing is just the way someone specified it. They specified it that way for a reason. It is standard wisdom in politics that if you control the agenda, it doesn’t matter how people vote. If you actually want the voters to decide on C, put that question to them. If the real question of the day is not being put to them, ask why.
I think I understand the confusion. When I say “vote”, I am not necessarily talking about electorates or plebiscites. In fact, Pettit’s paper is remarkable precisely for also considering situations that have nothing to do with politics or government.
Consider the case of a trust fund that must make decisions for the trust based on how the original creator specified it. For example, they may be charged to make investment decisions that best support a specific community or need. The executors of this trust try their hardest to meet the spirit as well as the letter of these instructions, so they end up adopting rules that require members to vote separately on whether a proposed action meets the spirit of the instructions and whether it meets the letter of the instructions. The rationale is that this ensures the executors as a whole have done their homework and cannot be held liable for missing one or the other requirement through a single vote.
The doctrinal paradox in this case demonstrates that you can get different outcomes depending on whether you have the executors vote directly on whether the action meets both spirit and letter, or have them vote separately on the two components of the question.
I hope that this explains what I mean by “required to do it” by providing an incentive that has nothing to do with politics. I hope it also encourages a shift towards thinking in terms of systems and their consistency criteria.
I won’t respond to the rest of the comment because discourse about political agenda is not relevant to this discussion.
Neither am I. The “standard wisdom” I quoted applies to the very broadest understanding of “politics”: the theory of collective decision-making.
They didn’t “end up” adopting those rules, they chose those rules. Which are clearly the wrong rules.
In all this I’m also not seeing a place for the people participating in these joint decisions to discuss matters. Having each “voter” (see above) make their decision in isolation, on an agenda set by someone else, who will then combine the votes into a joint decision on questions never put, is a prima facie absurd way to do business, except for the one setting those rules and choosing the questions.
If you liked this post, you will love Amartya Sen’s Collective Choice and Social Welfare. Originally written in 1970 and expanded in 2017, this is a thorough development of the many paradoxes in collective choice algorithms (voting schemes, ways to aggregate individual utility, and so on.)
My sense is the AI alignment community has not taken these sorts of results seriously. Preference aggregation is non-trivial, so “aligning” an AI to individual preferences means something quite different from “aligning” an AI to societal preferences. Different equally-principled ways of aggregating preferences will give different results, which means that someone somewhere will not get what they want. Hence an AI agent will always have some type of politics, if only by virtue of its preference aggregation method, and we should be investigating which types we prefer.
I thought Incomplete Contracting and AI Alignment addressed this situation nicely.
How do we make a choice about the “right” politics/preference aggregation method for an AI? I don’t think there is or can be an a-priori answer here, so we need something else to break the tie. One strategy is to ask what the consequences of each type of political system will be in the actual world, rather than an abstract behind-the-veil scenario. But more fundamentally I don’t know that we can do better than what humans have always done, which is group discussions with the intention of coming to a workable agreement. Perhaps an AI agent can and should participate in such discussions. It’s not just the formal process that makes voting systems work, but the perceived legitimacy and therefore good-faith participation of the people who will be governed by it, and this is what such discussion creates.
Well, the “ideal” way to aggregate beliefs is by Aumann agreement, and the “ideal” way to aggregate values is by linear combination of utility functions. Neither involve voting. So I’m not sure voting theory will play much of a role. It’s more intended for situations where everyone behaves strategically; a superintelligent AI with visibility into our natures should be able to skip most of it.
This is not obvious to me. Can you elaborate?
Aumann agreement isn’t an answer here, unless you assume strong Bayesianism, which I would advise against.
I have to say I don’t know why a linear combination of utility functions could be considered ideal. There are some pretty classic arguments against it, such as Rawls’ maximin principle, and more consequentialist arguments against allowing inequality in practice.
To expand the argument a bit: if many people have evidence-based beliefs about something, you could combine these beliefs by voting, but why bother? You have a superintelligent AI! You can peek into everyone’s heads, gather all the evidence, remove double-counting, and perform a joint update. That’s basically what Aumann agreement does—it doesn’t vote on beliefs, but instead tries to reach an end state that’s updated on all the evidence behind these beliefs. I think methods along these lines (combining evidence instead of beliefs) are more correct and should be used whenever we can afford them.
For more details on this, see the old post Share likelihood ratios, not posterior beliefs. Wei Dai and Hal Finney discuss a nice toy example in the comments: two people observe a private coinflip each, how do they combine their beliefs about the proposition that both coins came up heads? Combining the evidence is simple and gives the right answer, while other clever schemes give wrong answers.
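A toy version of that calculation, a sketch assuming two fair coins and that both observers happened to see heads:

```python
from itertools import product
from fractions import Fraction

# Two fair coins; person 1 privately observes coin 1, person 2 observes coin 2.
# Proposition: both coins came up heads. Assume both observers saw heads.
outcomes = list(product("HT", repeat=2))  # four equally likely coin pairs

def posterior(observed_indices):
    """P(both heads | the listed coins were observed to be heads)."""
    consistent = [o for o in outcomes if all(o[i] == "H" for i in observed_indices)]
    favourable = [o for o in consistent if o == ("H", "H")]
    return Fraction(len(favourable), len(consistent))

p1 = posterior([0])         # person 1's posterior: 1/2
p2 = posterior([1])         # person 2's posterior: 1/2
pooled = posterior([0, 1])  # joint update on both observations: 1

print((p1 + p2) / 2)  # 1/2 -- averaging posteriors throws away evidence
print(pooled)         # 1   -- pooling the underlying observations gets it right
```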
Imagine that after doing the joint update, the agents agree to cooperate instead of fighting, and have a set of possible joint policies. Each joint policy leads to a tuple of expected utilities for all agents. The resulting set of points in N-dimensional space has a Pareto frontier. Each point on that Pareto frontier has a tangent hyperplane. So there’s some linear combination of utility functions that’s maximized at that point, modulo some tie-breaking if the frontier is perfectly flat there.
Right, this is where strong Bayesianism is required. You have to assume, for example, that everyone agrees on the set of hypotheses under consideration and the exact models to be used. This is not just an abstract plan for slicing the universe into manageable events, but the actual structure and properties of the measurement instruments that generate “evidence.” If we wish to act as well, we also have to specify the set of possible interventions and their expected outcomes. These choices are well outside the scope of a Bayesian update (see e.g. Gelman and Shalizi or John Norton).
Also, I do not have a super-intelligent AI. I’m working on narrow AI alignment, and many of these systems have social choice problems too, for example recommender systems.
The Pareto frontier is a very weak constraint, and lots of points on it are bad. For a self-driving car that wants to drive both quickly and safely, both not moving at all and driving as fast as possible are on the frontier. For a distribution of wealth problem, “one person gets everything” is on the frontier. The hard problem is choosing between points on the frontier, that is, trading off one person’s utility against another. There is a long tradition of work within political economy which considers this problem in detail. It is, of course, partly a normative question, which is why norm-generation processes like voting are relevant.
But under these assumptions, combining evidence always gives the right answer. Compare with the example in the post: “vote on a, vote on b, vote on a^b” which just seems strange. Shouldn’t we try to use methods that give right answers to simple questions?
I think if you have a set of coefficients for comparing different people’s utilities (maybe derived by looking into their brains and measuring how much fun they feel), then that linear combination of utilities is almost tautologically the right solution. But if your only inputs are each person’s choices in some mechanism like voting, then each person’s utility function is only determined up to affine transform, and that’s not enough information to solve the problem.
For example, imagine two agents with utility functions A and B such that A<0, B<0, AB=1. So the Pareto frontier is one branch of a hyperbola. But if the agents instead had utility functions A’=2A and B’=B/2, the frontier would be the same hyperbola. Basically there’s no affine-invariant way to pick a point on that curve.
You could say that’s because the example uses unbounded utility functions. But they are unbounded only in the negative direction, which maybe isn’t so unrealistic. And anyway, the example suggests that even for bounded utility functions, any method would have to be sensitive to the far negative reaches of utility, which seems strange. Compare to what happens when you do have coefficients for comparing utilities, then the method is nicely local.
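A quick numerical check of that hyperbola example, a sketch assuming equal weights on the two utility functions:

```python
import numpy as np

# Pareto frontier from the example: A < 0, B < 0, A*B = 1.
a = -np.exp(np.linspace(-3, 3, 2001))  # A values (negative)
b = 1.0 / a                            # B values, so A*B = 1 everywhere

# Pick the frontier point maximizing the equal-weight sum A + B.
i = np.argmax(a + b)
print(a[i], b[i])  # approximately (-1, -1)

# Affine rescaling A' = 2A, B' = B/2 leaves the frontier set {x*y = 1, x < 0}
# unchanged, but the same equal-weight rule now selects a different outcome.
a2, b2 = 2 * a, b / 2
j = np.argmax(a2 + b2)
print(a[j], b[j])  # approximately (-0.5, -2): a different underlying point
```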
Does that make sense?
a) “Everyone does Bayesian updating according to the same hypothesis set, model, and measurement methods” strikes me as an extremely strong assumption, especially since we do not have strong theory that tells us the “right” way to select these hypothesis sets, models, and measurement instruments. I would argue that this makes Aumann agreement essentially useless in “open world” scenarios.
b) Why should uniquely consistent aggregation methods exist at all? A long line of folks including Condorcet, Arrow, Sen and Parfit have pointed out that when you start aggregating beliefs, utility, or preferences, there do not exist methods that always give unambiguously “correct” answers.
Sure, but finding the set of coefficients for comparing different people’s utilities is a hard problem in AI alignment, or political economy generally. Not only are there tremendous normative uncertainties here (“how much inequality is too much?”) but the problem of combining utilities is a minefield of paradoxes even if you are just summing or averaging.
Yeah. I was more trying to argue that, compared to Bayesian ideas, voting doesn’t win you all that much.
I kinda lost track of what made me think about it, but one of the conditions for Aumann is that you think the other actor is rational. Then it seemed that it might be rational not to agree if you well-foundedly think the other party is irrational. I think the idea was that if both parties rely on public information and they compete to accomplish something based on that, then the difference must come from how they process that information. If they were to process the information in the same way, they would need to come to the same conclusion.
So, in a way, assuming Aumann agreement might secretly assume that everybody “deep down” has the same base policy, which might be better warranted if one is looking at differences in information access, but for genuine opinion differences it becomes much more doubtful.
Explicit voting isn’t even necessary for this effect to show up. This is an explanation of a notable effect wherein a group of people appear to hold logically inconsistent beliefs from the perspective of outsiders.
Examples:
- My (political out-group) believes X and ~X
- (Subreddit) holds inconsistent beliefs
The obvious solution is to use probabilities rather than absolute judgements of true/false. Although we still have the issue that, in general, the average of two products is different from the product of two averages. This inconsistency is much smaller though, and can be dealt with by a more nuanced calculation (accounting for the possibly correlated distributions behind the point estimates) if absolutely necessary.
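A tiny numerical illustration with made-up probability judgements (assuming, for simplicity, that each judge treats the two premises as independent):

```python
# Hypothetical probability judgments by the three judges for the two premises.
p_a = [0.9, 0.8, 0.4]
p_b = [0.9, 0.3, 0.8]

# Each judge's own probability for the conclusion c = a AND b.
p_c = [pa * pb for pa, pb in zip(p_a, p_b)]

mean = lambda xs: sum(xs) / len(xs)

premise_first = mean(p_a) * mean(p_b)  # average the premises, then multiply: ~0.47
conclusion_first = mean(p_c)           # average each judge's conclusion:     ~0.46

print(premise_first, conclusion_first)  # close, but not identical
```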