Geometric Utilitarianism (And Why It Matters)

Do you like using numbers to represent uncertainty and preference, but also care about things like fairness and consent? Are you an altruist on a budget, looking to do the most good with some of your resources, but want to pursue other goals too? Are you looking for a way to align systems to the interests of many people? Geometric Utilitarianism might be right for you!

Classic Utilitarianism

The Harsanyi utilitarian theorem is an amazing result in social choice theory, which states that if a social choice function $f$ is both

  • VNM-rational, and

  • Paretian (if every agent prefers one option over another, then so does $f$),

then for any joint utility $u$, $f$'s utility for $u$ must be equal to a weighted average of individual utilities that looks like $A_H(u) = H \cdot u = \sum_{i=1}^n H_i u_i$, where $\cdot$ is the dot product and $H_i \geq 0$ are weights given to each agent's utility that sum up to 1.

As Diffractor puts it here in their excellent Unifying Bargaining sequence:

Basically, if you want to aggregate utility functions, the only sane way to do so is to give everyone importance weights, and do a weighted sum of everyone’s individual utility functions.

Diffractor is using "sane" as a shorthand for VNM-rational here, which is extremely reasonable given the success of expected utility maximization as a model of rational decision-making. However, I have recently been radicalized by reading Scott Garrabrant's very compelling Geometric Rationality sequence, which has significantly updated my thinking on many topics in rationality, including how to sensibly combine utilities. And I wanted to see if I could prove some results about what happens if we use a geometric weighted average of utilities that looks like $A_G(u) = \prod_{i=1}^n u_i^{G_i}$, when the weights $G_i$ sum to 1 and utilities are shifted to be non-negative. (Which I'll be assuming throughout this post.)
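
To make the notation concrete, here's a minimal Python sketch of the two aggregators (the function names are mine, not anything from the math sequence), assuming the weights sum to 1 and the utilities are non-negative:

```python
import numpy as np

def harsanyi_aggregate(u, H):
    """Weighted arithmetic mean A_H(u) = H · u."""
    return np.dot(H, u)

def geometric_aggregate(u, G):
    """Weighted geometric mean A_G(u) = prod_i u_i^G_i."""
    return np.prod(np.power(u, G))

u = np.array([4.0, 1.0])          # a joint utility for two agents
w = np.array([0.5, 0.5])          # equal weights
print(harsanyi_aggregate(u, w))   # 2.5
print(geometric_aggregate(u, w))  # 2.0
```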

Results About Geometric Utilitarianism

What might it mean for a group to be rational? Well, at the very least, that group had better be doing something Pareto optimal. Otherwise we can shuffle around their behavior and get strictly more value for that group. And it turns out Pareto optimality is enough to let us parameterize all rational group behavior as maximizing some geometric weighted average of individual utilities.

This geometric utilitarian theorem for group rationality is analogous to the VNM theorem for individual rationality, which lets us model rational agents as maximizing expected utility.

In more mathy terms, here are the results (some I think are well-known and a few I think are new):

Main Results

  1. $A_H$ and $A_G$ are both Pareto monotone, and maximizing either can lead to Pareto optimality.

  2. Given any Pareto optimal joint utility $u^*$, we can retroactively find weights $H$ and $G$ which make $u^*$ optimal according to $A_H$ and $A_G$.

  3. Using 2, given the output $u^*$ of any Pareto optimal bargaining protocol or social choice function $f$, we can find weights $H$ and $G$ which let us view $f$ as maximizing $A_H$ or $A_G$. (Analogous to how we can view any agent with VNM preferences as maximizing a utility function $u$.) In general, viewing $f$ as an $A_G$ maximizer will yield more specific predictions, because:

  4. For points $u^*$ on the interior of the Pareto frontier, where $u_i^* > 0$ for all agents, we can calculate weights $G$ which make $u^*$ the unique optimum of $A_G$ (sketched below). By contrast, even after making $u^*$ optimal according to $A_H$, $A_H$ is indifferent everywhere the Pareto frontier has the same slope as it does at $u^*$.
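
Here's a sketch of where those weights come from, under my own simplifying assumptions that the feasible set is convex and has a supporting hyperplane at $u^*$ with normal vector $H$ (the full argument is in the math sequence). Maximizing $A_G$ is the same as maximizing $\ln A_G(u) = \sum_i G_i \ln u_i$, whose gradient at $u^*$ is $(G_1/u_1^*, \ldots, G_n/u_n^*)$. Aligning that gradient with $H$ and normalizing gives

$$G_i = \frac{H_i u_i^*}{\sum_{j=1}^n H_j u_j^*},$$

and the concavity of $\ln A_G$ then makes $u^*$ the unique optimum whenever all the $G_i$ are positive.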

Bonus Results

  1. $A_H$ and $A_G$ are both smooth when $u_i > 0$ for all agents. (They are continuous and infinitely differentiable in that range, and $A_H$ is smooth everywhere.) $A_G(u) = 0$ whenever any agent's utility is 0, and this leads $A_G$ to prefer compromises over extremes whenever $G_i > 0$ for all agents.[1]

    1. $A_H$ and $A_G$ also both preserve geometric convexity where they're continuous: if you feed in a convex set of feasible joint utilities, the result is a convex subset of $\mathbb{R}$. When the feasible set is compact, then so is its image in $\mathbb{R}$. (Bounded shapes get mapped to line segments or a single point.)

  2. $A_G$ is smooth in the weights $G$ where $u_i > 0$ for all agents, and $A_H$ is smooth in $H$ everywhere. Small changes to the weights $G$ and $H$ lead to small changes in $A_G$ and $A_H$.

  3. When $G_i > 0$ for all agents, the optimum $u^*(G)$ is unique and continuous. In other words, when all agents have positive weight, individual utilities shift continuously as we change geometric weights. We can also pad utilities in a way that makes $u^*(G)$ unique and continuous for all $G$, while remaining an arbitrarily good approximation of maximizing $A_G$.

    1. By contrast, varying $H$ and maximizing $A_H$ causes individual utilities to jump discontinuously, because $A_H$ maximizers exhibit thrashing behavior when faced with linear trade-offs. Small changes in $H$, or small changes in the trade-off being faced, can lead $A_H$ maximizers to thrash between maximizing one agent's utility and another's, with no inclination towards compromise anywhere along the way. This is the major way in which $A_H$ deviates from what we'd intuitively like out of a "utility aggregation" method.

    2. We can pick $G$ so that $A_G$ prefers a compromise over the extremes.

This inclination towards compromise is a big deal, and it's the property that means $A_G$ isn't VNM-rational. We can pick weights $G$ which make $A_G$ strictly prefer one particular convex combination of outcomes to any other, including the underlying pure outcomes. VNM-rational agents never have preferences that look like this.
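
To see the thrashing and the compromise side by side, here's a small Python sweep of my own (it uses the linear $100 split that the next section walks through):

```python
import numpy as np

# Two agents with linear utility over a split of $100: Alice gets x, Bob gets 100 - x.
x = np.linspace(0.0, 100.0, 1001)

for w in [0.49, 0.50, 0.51]:
    # A_H's optimum jumps from 0 to 100 as Alice's weight crosses 0.5.
    best_H = x[np.argmax(w * x + (1 - w) * (100 - x))]
    # A_G's optimum x* = 100 * w shifts continuously with the weights.
    best_G = x[np.argmax(x**w * (100 - x)**(1 - w))]
    print(f"w={w:.2f}  A_H optimum: {best_H:5.1f}  A_G optimum: {best_G:5.1f}")

# w=0.49  A_H optimum:   0.0  A_G optimum:  49.0
# w=0.50  A_H optimum:   0.0  A_G optimum:  50.0   (A_H is indifferent everywhere)
# w=0.51  A_H optimum: 100.0  A_G optimum:  51.0
```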

Why Geometric Rationality?

Why would we make such a drastic move as building an agent with geometric preferences? It turns out that geometric agents handle trade-offs between multiple values much better than VNM agents.

For example, consider a VNM agent choosing how to split $100 between Alice and Bob, who each have utility functions that are linear in money (at least for amounts up to $100). No matter how we set the weights $H$, the VNM axioms force $A_H$ to have one of the following optima:

  • Give Alice all the money

  • Give Bob all the money

  • Complete indifference between all splits

A VNM agent can’t prefer a compromise to both extremes, when trade-offs are linear.

Compare this to a geometric agent, which splits the $100 proportional to the weights assigned to Alice and Bob. The same contrast appears when considering how to spend resources advancing Alice and Bob’s interests. If Alice and Bob are constructing an agent to act on their behalf, this is probably more what they had in mind when they went looking for a weighted way to balance between their interests. There are geometric weights Alice and Bob can both agree to, and that bargaining range is simply empty when it comes to Harsanyi weights. Nash bargaining is a special case of geometric rationality where all agents are given equal weight.
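
As a quick check of that proportional-split claim, here's my derivation for the two-agent case: maximizing $A_G(x) = x^{G_A} (100 - x)^{G_B}$ over splits $x \in [0, 100]$ is the same as maximizing $G_A \ln x + G_B \ln(100 - x)$, and setting the derivative to zero gives

$$\frac{G_A}{x} = \frac{G_B}{100 - x} \quad\Longrightarrow\quad x^* = \frac{G_A}{G_A + G_B} \cdot 100 = 100\,G_A$$

when the weights sum to 1: exactly Alice's share of the weight.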

The same phenomenon happens with lotteries. If a VNM agent has to decide how to allocate an indivisible good, such as a hat, it faces the same trilemma over lotteries about how to allocate it:

  • Giving Alice the hat is optimal

  • Giving Bob the hat is optimal

  • Complete indifference between all lotteries about how to allocate the hat

A VNM agent can’t prefer any weighted coin flip over both pure outcomes.

Again, a geometric agent facing the same decision will pick an option that splits expected utility proportional to the weights given to Alice and Bob. And as we’ll see in the next post, we can get even better results if Alice and Bob can make side payments to each other.
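
The same one-line calculation as the money split shows this (again my derivation): if Alice gets the hat with probability $p$, the expected utilities are $(p, 1 - p)$, and $A_G(p) = p^{G_A} (1 - p)^{G_B}$ is maximized at $p = G_A$, giving Alice expected utility $G_A$ and Bob expected utility $G_B$.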

How Can We Apply These Results?

There are a few different lenses through which I think geometric aggregation is useful:

  • As a model of group rationality

  • As a model of individual rationality

  • As a moral framework

Group Rationality

Since anything that leads to a Pareto optimal outcome can be seen as maximizing $A_G$ for some weights $G$, we can model any Pareto optimal bargaining solution or social choice function as maximizing some weighted geometric average of individual utilities. This becomes helpful constructively when we can identify the weights before knowing where to find the optima. For example, Nash bargaining maximizes the product of utilities $\prod_{i=1}^n u_i$, which means it also maximizes the $n$-th root of the product of utilities $\left( \prod_{i=1}^n u_i \right)^{1/n}$.[2] This is the same as maximizing $\prod_{i=1}^n u_i^{1/n}$, which in turn is the same as maximizing $A_G$ when we set all of the weights $G_i = \frac{1}{n}$.

We could also try to formalize the intuition that "every negotiator should benefit equally from the agreement." The Kalai-Smorodinsky bargaining solution takes this approach, and Diffractor makes a compelling argument for it in their Unifying Bargaining sequence. If we standardize everyone's utility function by shifting and scaling each into the interval [0, 1], then KS picks out the point on the Pareto frontier where all agents receive the same standardized utility. We can calculate the weights $G$ for this point and use them to guide an $A_G$ maximizer right there.
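
Here's a minimal sketch of that recipe on a hypothetical two-agent example (the frontier and the weight formula $G_i \propto H_i u_i^*$ from the earlier sketch are my own illustration, not a quote from the math sequence):

```python
import numpy as np

# Hypothetical linear Pareto frontier 0.5*uA + uB = 1:
# Alice's utility uA ranges over [0, 2] and Bob's uB over [0, 1].
uA = np.linspace(0.0, 2.0, 20001)
uB = 1.0 - 0.5 * uA

# Kalai-Smorodinsky: the frontier point with equal standardized utilities,
# i.e. where uA / 2 == uB / 1.
ks = np.argmin(np.abs(uA / 2.0 - uB / 1.0))
uA_ks, uB_ks = uA[ks], uB[ks]             # (1.0, 0.5)

# Weights that make the KS point optimal for A_G: G_i proportional to H_i * u_i*,
# where H = (0.5, 1) is the normal vector of the (flat) frontier.
G = np.array([0.5, 1.0]) * np.array([uA_ks, uB_ks])
G = G / G.sum()                           # (0.5, 0.5)

# Check: maximizing A_G over the frontier lands back on the KS point.
A_G = uA ** G[0] * uB ** G[1]
print(uA[np.argmax(A_G)], uA_ks)          # 1.0 1.0
```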

There is a special case of bargaining where the Pareto frontier is completely flat, and this is the case we saw earlier where classic linear utility aggregation simply cannot capture the idea of a negotiated agreement. This can happen when:

  • Splitting a resource among agents that value it linearly

  • Deciding how to spend such a resource

  • Using such a resource for side payments

  • Negotiating the probability of a lottery

In this special case of a flat Pareto frontier, the Nash and KS solutions coincide exactly with “maximize economic surplus and split it equally.”

And it turns out that in general, we need something like side payments to actually achieve Pareto optimal results. Any sensible bargaining protocol ignores the scale factor of each agent’s utility function, since that’s a free parameter when choosing a utility function to represent each agent’s preferences. But that also means that all sensible bargaining protocols give completely nonsensical results when that scale factor actually matters, unless we use something like side payments to interpersonally compare utilities.

The next post of this sequence goes into more detail about how side payments reintroduce the scale information that gets lost when using utility functions, and I want to call side payments out as an important component of group rationality. Money is the interpersonally comparable unit of caring, and we need something like that to even talk about concepts like economic surplus or claims like "Alice benefits more than Bob is harmed."

Scott Garrabrant, Wei Dai and others have also pointed out the need for a broader concept of rationality than the VNM axioms when aggregating utilities. Groups of voluntarily coordinating agents, or voluntarily merged AIs, simply don’t behave like VNM-rational agents. I would actually suggest that we should view Harsanyi’s aggregation theorem as an impossibility result. If we require the aggregate to be VNM-rational, then the aggregate can’t represent a negotiated agreement among voluntary participants. Linear aggregation can’t represent voluntary coordination, because there are no weights that are mutually acceptable to all participants when trade-offs are linear.

Bargaining With Ourselves

There are also many contexts in which we can model ourselves as being made up of many sub-agents with different interests, and we can apply the same group rationality techniques to balance between them. Scott gives several examples in his Geometric Rationality sequence, and I recommend checking it out for more details.

In one of those examples, Scott describes an agent with both selfish and selfless desires. In geometric rationality, these desires are represented by different internal agents, which bargain over the decision that the overall agent will make. This is a nice mental tool, but it also makes quantitatively different predictions than VNM rationality, and I suspect that the geometric approach is a better match for how people naturally balance between conflicting desires.

For example, if you think of people as valuing the health of birds the same way they value an elastic good like soft drinks, you might expect people's willingness to spend money to protect birds from oil ponds to be sensitive to the ratio of dollars to birds helped. If you instead think of "Birds" as a coalition represented by a single internal agent, whose weight doesn't change much with the actual number of birds being helped, that gives one explanation for the observed less-than-linear relationship between the number of birds helped and people's willingness to pay to help them.

Is this a cognitive bias? Would you take a pill that induced a linear relationship between the size of problems in the world and your willingness to sacrifice to address them? How can an altruist ever justify spending money on themselves, when that same money can do so much more good for others?

For me, the justification that feels the most satisfying is Scott Alexander’s amazing Nobody is Perfect, Everything is Commensurable. I give 10% of my income to effective charities, including the Animal Welfare Fund, and the rest I put towards all sorts of other purposes. Geometric rationality is all about proportional representation among internal desires, and not feeling like you need to spend all of your time and money on maximizing one particular form of value.

Upgrading Utilitarianism

Geometric utilitarianism seeks to improve on classic utilitarianism, and it has two free parameters which we can use to encode even more of our moral intuitions:

  • The feasible options $F$

    • This encodes what is acceptable, and what externalities need to be internalized.

      • Can Alice pollute without compensating others that are negatively affected?

      • Can Alice change her hair style without compensating others that are negatively affected?

      • What forms of compensation are appropriate, if any?

  • The weights $G$

    • This encodes our notions of fairness.

      • How should the economic surplus from this decision be distributed?

The moral position that “people shouldn’t be negatively affected without their consent” is central to the philosophy of voluntarism, and we can make our utilitarianism more voluntarist by including more affected agents in our consideration when making decisions. This inclusion can look like:

  • Not negatively affecting agents by default

    • If Alice will pollute by default, any bargaining from that baseline will involve Bob paying her not to. (Or bargaining falling through because Alice profits more from polluting than Bob is willing or able to pay.)

    • If Alice doesn’t pollute by default, she only pollutes if she also pays Bob a fair share of the economic surplus generated. (Or she doesn’t pollute at all, if she benefits less than it would take to compensate Bob for that externality.)

  • Assigning agents positive weight in our utility aggregation function

    • This automatically requires that they be at least as well off as in the absence of an agreement, internalizing any externalities. It also gives them a share of the resulting economic surplus, proportional to their weight.

My current take is that answering “which externalities should be permitted without incurring liability” is complicated. It’s a decent chunk of the overall complexity of morality and social norms. I believe this question is central to «Boundaries» as a technical concept, and I recommend checking out that sequence for more details. Sometimes we need the consent of literally every affected party (e.g. sharing of private health information, anything to do with sex). Sometimes we just need the consent of a group, without needing the consent of every member (e.g. pollution, eminent domain, any other law enforcement). And sometimes we should be able to choose freely without needing to compensate anyone that doesn’t like our choice (e.g. hair style, private thoughts, boycotting businesses, any other legal right).

Drawing these boundaries is complicated, and this is only one factor which goes into designing $F$. What actions are permissible, and under what circumstances? Medical ethics are vastly different from legal ethics, which are completely different from the standards regulating war between countries. How do we handle epistemic disagreements, or disagreements about how the boundaries should be drawn? What types of side payments are acceptable, in what contexts?

Similarly, $G$ captures our ideas of fairness, and these are also heavily context-dependent. Some interactions, like buying an apple, invoke notions of "fairly splitting the gains from trade." Other aspects of human life are deliberately regulated competitions, where gains for one party are necessarily losses for another. And we have different notions of "fair and unfair practices" for competition between individuals for jobs, romantic partners, and social status. We have yet more notions of fairness for businesses competing for market share and favorable legislation. For athletes, for countries, for political candidates, our standards for fairness are complex and nuanced, but they all answer the question "Who should get what?"

Geometric utilitarianism factors the problem of morality into 3 sub-problems, and solves the last one:

  1. Decide on the feasible options $F$

  2. Pick weights $G$ for each agent

  3. Combine these into a decision (maximize $A_G$)

This is an attempt to improve on classic utilitarianism, which didn’t include considerations of fairness, consent, or any other ethical standards that might be relevant to a decision. Utilitarian thought experiments tend to focus more on “what maximizes surplus” and less on “how to split it fairly” or “whose consent is needed for this decision anyway?”

If we were building a single powerful system to choose on our behalf, in full generality, well ideally we would stop and Not Do That. But if we’re building any system smart enough to understand our preferences, we wouldn’t want it to Shut Up and Multiply trying to maximize a linear aggregate of individual utilities while ignoring all of our other moral principles. For a system to make good choices across all domains, it needs to incorporate not just the complexity of each person’s values, but the complexity of how we want those values to influence decisions in each domain.

Choose Your Own Adventure

I’ve split the math off into its own sequence, and it’s got lots of pictures and interactive Geogebra toys to help build intuition, but mostly it’s about working through the details behind the results summarized in this post. The first post in that sequence goes through the proofs for the main results, with the details for a couple pieces broken out into their own posts. If you’re interested in the math behind those results, I’d start there!

The next post in this sequence is about side payments, and the absolutely critical role they play in allowing us to actually reach Pareto optimal outcomes. Feel free to treat the math posts like an appendix and keep going from here!

  1. ^

    This summary used to say that $u^*(G)$ is continuous everywhere, including around the boundary where $G_i = 0$ for some agent. But this isn't necessarily the case. Individual Utilities Shift Continuously as Geometric Weights Shift goes into the details, but I recommend starting with Proving the Geometric Utilitarian Theorem to get oriented.

  2. ^

    Maximization is invariant under applying a strictly increasing function. Which is obvious in retrospect, but I spent some time thinking about derivatives before I read Scott pointing it out.