There is no relation to Simpson’s paradox. In Simpson’s paradox, each of the data points comes from the same one-dimensional x-axis, so as you keep increasing x, you can run through all the data points in one group, go out the other side, and then get to another group of data points. In preference aggregation, there is no analogous meaningful way to run through one agent considering each possible state of the universe, keep going, and get to another agent considering each possible state of the universe.
Simpson’s paradox can occur in discrete cases too, such as the well-known example of a university that appeared biased against women overall even though each of its constituent colleges was biased in their favour.
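To make the discrete case concrete, here is a minimal sketch with made-up admission numbers (not the actual figures from that case): every department admits women at a higher rate than men, yet the pooled totals favour men, because women disproportionately applied to the more selective department.

```python
# Hypothetical admissions counts, chosen only to illustrate the reversal
# (not the real figures): (admitted, applied) per group.
departments = {
    "Dept 1": {"men": (120, 200), "women": (68, 100)},   # 60% vs 68%
    "Dept 2": {"men": (10, 100),  "women": (24, 200)},   # 10% vs 12%
}

def rate(admitted, applied):
    return admitted / applied

# Within every department, women are admitted at the higher rate.
for name, groups in departments.items():
    print(f"{name}: men {rate(*groups['men']):.0%}, "
          f"women {rate(*groups['women']):.0%}")

# Pooling the departments reverses the comparison, because women
# disproportionately applied to the more selective department.
men_total = [sum(vals) for vals in zip(*(g["men"] for g in departments.values()))]
women_total = [sum(vals) for vals in zip(*(g["women"] for g in departments.values()))]
print(f"Overall: men {rate(*men_total):.0%}, women {rate(*women_total):.0%}")
```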
Good point. More relevantly, Simpson’s paradox relies on the groups containing different mixes of the independent variable. If every group contains each value of the independent variable in equal measure, Simpson’s paradox cannot occur. The analogue of this in decision theory would be the probability distribution over outcomes. So if each agent has different beliefs about what A and B are, it makes sense that everyone could prefer A over B while the FAI prefers B; that’s because the FAI has better information, and knows that at least some people would prefer B if they knew more about what the options consisted of. If everyone would prefer A over B given the FAI’s beliefs, that reason goes away, and the FAI should choose A. This latter situation is the one modeled in the post, and the former does not seem particularly relevant, since there’s no point in asking which option someone prefers given bad information if you could instead apply their utility function to a better-informed estimate of the probabilities involved.
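To spell out why the reversal disappears once everyone is evaluated against the same beliefs, here is a minimal sketch. The aggregation rule (a non-negative weighted sum of expected utilities) and all of the numbers are illustrative assumptions, not anything specified above: if every agent’s expected utility for A beats theirs for B under the shared distribution, any such weighted sum must also favour A.

```python
# Shared (FAI's) probability distribution over three possible outcomes,
# conditional on choosing A or B. All numbers here are made up for
# illustration, as is the weighted-sum aggregation rule.
p_A = [0.5, 0.3, 0.2]
p_B = [0.2, 0.3, 0.5]

# Each agent's utilities over the three outcomes, plus a non-negative weight.
agents = [
    {"u": [10, 4, 1], "weight": 1.0},
    {"u": [7, 6, 2],  "weight": 2.5},
    {"u": [3, 2, 0],  "weight": 0.7},
]

def expected(u, p):
    return sum(ui * pi for ui, pi in zip(u, p))

# Every agent prefers A under the shared beliefs...
assert all(expected(a["u"], p_A) > expected(a["u"], p_B) for a in agents)

# ...so any non-negative weighted sum of those expected utilities agrees:
agg_A = sum(a["weight"] * expected(a["u"], p_A) for a in agents)
agg_B = sum(a["weight"] * expected(a["u"], p_B) for a in agents)
print(f"aggregate expected utility: A = {agg_A:.2f}, B = {agg_B:.2f}")
```

The point is just linearity: with a single shared distribution, the aggregate is a weighted combination of per-agent expected utilities that all point the same way, which is the decision-theoretic version of every group containing the independent variable in equal measure.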