Suppose I express a partial preference over “good worlds” and another one over “bad worlds”, for example “when everyone’s needs for food, water and shelter are met, then it is better for there to be more social connection” and “when I am living in extreme poverty, I prefer to be in a country with a good social safety net”. These talk about mutually exclusive worlds, and so lead to two distinct ordered chains. Then, on average you assign the same utility to a good world and a bad world, which seems very bad. How do we avoid this issue?
By adding a third preference, one which explicitly says that extreme poverty is worse than having all needs met. This links the two chains into a single ordering, so bad worlds no longer get normalized to the same average utility as good ones.
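A toy sketch of the failure and the fix, under one simplifying assumption not taken from the discussion above: each chain's worlds are scored by their rank in the ordering and mean-centred before being combined. The world descriptions and the `chain_utilities` helper are illustrative, not part of any stated formalism.

```python
def chain_utilities(chain):
    """Assign rank-based utilities to a chain (worse worlds first),
    centred so each chain has mean utility zero."""
    n = len(chain)
    mean = (n - 1) / 2
    return {world: i - mean for i, world in enumerate(chain)}

# Two disjoint chains from two partial preferences, worse world first.
good = ["needs met, little connection", "needs met, much connection"]
bad = ["poverty, no safety net", "poverty, safety net"]

# Normalized separately, both chains centre to zero, so on average a
# good world and a bad world receive the same utility.
separate = {**chain_utilities(good), **chain_utilities(bad)}

# The bridging preference "extreme poverty < all needs met" merges the
# chains into one ordering, pushing every bad world below every good one.
merged = chain_utilities(bad + good)
```

With the separate normalization, each chain sums to zero utility; after the merge, every bad world sits strictly below every good world.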
Remember that these are just pieces of the total utility: even if they are full preferences, they are not all of our preferences.