But under these assumptions, combining evidence always gives the right answer. Compare this with the example in the post — "vote on a, vote on b, vote on a^b" — which just seems strange. Shouldn't we try to use methods that give right answers to simple questions?
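To make the strangeness concrete, here is a minimal sketch of the "vote on a, vote on b, vote on a^b" problem (the discursive dilemma); the voter profile is illustrative, not from the post:

```python
# Three voters each hold logically consistent beliefs about propositions
# a and b, and therefore about (a and b). Majority-voting each question
# separately produces a logically inconsistent collective verdict.
voters = [
    {"a": True,  "b": True},   # believes both a and b
    {"a": True,  "b": False},
    {"a": False, "b": True},
]

def majority(values):
    """True iff a strict majority of the values are True."""
    return sum(values) > len(values) / 2

maj_a  = majority([v["a"] for v in voters])             # 2 of 3 say True
maj_b  = majority([v["b"] for v in voters])             # 2 of 3 say True
maj_ab = majority([v["a"] and v["b"] for v in voters])  # only 1 of 3 says True

print(maj_a, maj_b, maj_ab)
```

The group endorses a, endorses b, yet rejects a^b — an answer no individually consistent voter would give.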
a) “Everyone does Bayesian updating according to the same hypothesis set, model, and measurement methods” strikes me as an extremely strong assumption, especially since we do not have strong theory that tells us the “right” way to select these hypothesis sets, models, and measurement instruments. I would argue that this makes Aumann agreement essentially useless in “open world” scenarios.
b) Why should uniquely consistent aggregation methods exist at all? A long line of folks including Condorcet, Arrow, Sen and Parfit have pointed out that when you start aggregating beliefs, utility, or preferences, there do not exist methods that always give unambiguously “correct” answers.
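The oldest of these results, Condorcet's paradox, fits in a few lines; the three rankings below are the standard illustrative profile:

```python
# Condorcet's paradox: each voter has a transitive ranking, yet the
# pairwise majority preference is cyclic, so no "correct" winner exists.
rankings = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True iff a strict majority ranks x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# Prints True for all three pairs: A beats B, B beats C, and C beats A.
```

Arrow's theorem generalizes this: no aggregation rule satisfying a few modest fairness conditions can rule such cycles out.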
I think if you have a set of coefficients for comparing different people’s utilities (maybe derived by looking into their brains and measuring how much fun they feel), then maximizing that linear combination of utilities is almost tautologically the right solution.
Sure, but finding the set of coefficients for comparing different people’s utilities is a hard problem in AI alignment, and in political economy generally. Not only are there tremendous normative uncertainties here (“how much inequality is too much?”), but the problem of combining utilities is a minefield of paradoxes even if you are just summing or averaging.
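One way to see that even summing versus averaging is already contested: the two rules can disagree about whether the same change is an improvement. A toy example (the utility numbers are made up):

```python
# Adding a person whose utility is positive but below the existing average
# raises total utility while lowering average utility, so the two simplest
# aggregation rules give opposite verdicts on the same change.
before = [10, 10]      # two people, utility 10 each
after  = [10, 10, 1]   # add a third person with low but positive utility

sum_says_better = sum(after) > sum(before)                        # 21 > 20
avg_says_better = sum(after) / len(after) > sum(before) / len(before)  # 7 < 10

print(sum_says_better, avg_says_better)  # True False
```

Push the sum rule to its limit and you get Parfit's repugnant conclusion; push the average rule and you get its own well-known pathologies.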
Yeah. I was more trying to argue that, compared to Bayesian ideas, voting doesn’t win you all that much.