One classic but unpopular argument for agreement goes as follows: if two agents disagree, they are collectively Dutch-bookable; a bookie could take intermediate positions with both of them and be guaranteed to make money.
This argument has the advantage of being very practical. The upshot is that two disagreeing agents should bet with each other and pick up the profits themselves, rather than waiting for the bookie to come around.
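To make the arithmetic concrete, here is a minimal sketch with made-up numbers (the probabilities 0.8 and 0.2 and the names are purely illustrative), showing both the bookie's guaranteed profit and how the two agents can capture that surplus by betting directly:

```python
# Toy Dutch-book sketch with hypothetical probabilities.
p_alice = 0.8  # Alice's probability that event X occurs
p_bob   = 0.2  # Bob's probability that event X occurs

# The bookie sells Alice a $1 ticket on X and Bob a $1 ticket on not-X,
# each at a price the buyer considers favorable.
price_alice = 0.7  # Alice's expected value: 0.8 - 0.7 = +0.10
price_bob   = 0.7  # Bob values not-X at 1 - 0.2 = 0.8, so EV = +0.10

for x_occurs in (True, False):
    payout = 1.0  # exactly one of the two tickets pays out either way
    print(f"X={x_occurs}: bookie profit = {price_alice + price_bob - payout:+.2f}")
# Prints +0.40 in both cases: the bookie's profit comes entirely from
# the gap between the two agents' probabilities.

# The upshot: Alice and Bob cut out the bookie by betting directly at an
# intermediate price, say 0.5. Alice buys the $1 ticket on X from Bob.
price = 0.5
ev_alice = p_alice - price  # +0.30 by Alice's lights
ev_bob   = price - p_bob    # +0.30 by Bob's lights
print(f"direct bet EVs: Alice {ev_alice:+.2f}, Bob {ev_bob:+.2f}")
```

Each side expects to profit by its own lights, so the bet is itself a Pareto improvement in expectation.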
More generally, Critch shows that if two agents can negotiate with each other to achieve Pareto improvements, they will behave like a single agent with one prior and a coherent utility function. Critch also suggests that we can interpret the result in terms of agents making bets with each other: once all the fruitful bets have been made, the agents act like they have a common prior (one which treats the two different priors as two competing hypotheses).
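Here is a toy sketch of the "two priors as hypotheses" reading; this is just an illustrative Bayesian mixture with made-up numbers, not Critch's actual formalism. The merged agent puts a weight on each original prior, and each bet settled by observation shifts weight toward the prior that predicted better, exactly as Bayesian updating would:

```python
# Two point-hypothesis priors about a coin's bias toward heads.
p_heads_A, p_heads_B = 0.7, 0.4
w_A, w_B = 0.5, 0.5  # initial weights in the merged agent's mixture

observations = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails
for obs in observations:
    like_A = p_heads_A if obs else 1 - p_heads_A
    like_B = p_heads_B if obs else 1 - p_heads_B
    w_A, w_B = w_A * like_A, w_B * like_B  # reweight by likelihood
    total = w_A + w_B
    w_A, w_B = w_A / total, w_B / total   # renormalize

print(f"weight on A's prior: {w_A:.3f}, on B's prior: {w_B:.3f}")
# A's prior predicted the mostly-heads data better, so it gains weight.
# Betting on each disagreement and settling against observation moves
# money between the agents in just this way.
```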
So the overall argument becomes: if people can negotiate to take Pareto improvements, then in the limit of this process they'll behave as if they had a common prior and shared preferences.
A practical version of this argument might involve an axiom like: if you and I have different preferences, then the "right thing to do" is to take both preferences into account. You and I can eventually reach agreement about what is right-in-this-sense by negotiating Pareto improvements. This looks something like preference utilitarianism; in the limit of everyone negotiating, a grand coalition is established in which "what's right" commands full agreement among all participants. Any difference between our world and that world can be attributed to failures to take Pareto improvements, which we can think of as failures to approximate ideal rationality.
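As a toy illustration of why the negotiated coalition acts like a single agent (the payoffs here are hypothetical; this is just the standard weighted-sum characterization of Pareto optimality): every option the coalition could settle on maximizes some weighted sum of the two utility functions, and the negotiation determines the weights.

```python
# Hypothetical payoffs: option -> (utility to you, utility to me).
options = {
    "A": (3.0, 0.0),
    "B": (2.0, 2.0),
    "C": (0.0, 3.0),
    "D": (1.5, 1.5),  # Pareto-dominated by B, so never chosen
}

for w in (0.2, 0.5, 0.8):  # w = negotiated weight on your preferences
    best = max(options, key=lambda o: w * options[o][0] + (1 - w) * options[o][1])
    print(f"weight {w:.1f} on you -> choose {best}")
# Each undominated option wins for some weight; the dominated option D
# never does. Negotiating Pareto improvements settles on some such
# weight, after which the coalition acts on one aggregate utility.
```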
This also involves behaving as if we agree on matters of fact, since if we don't, we're Dutch-bookable; we've left money on the table and should negotiate another Pareto improvement by betting on our disagreements.
Furthermore, everyone would agree on a common prior, in the sense that they would behave as if they were using a common prior.
Notice the relationship to the American Pragmatist definition of truth as whatever the scientific community would eventually agree on in the limit of investigation: "right" becomes whatever everyone would agree on in the limit of negotiation.
Another argument for agreement which you haven't mentioned is Robin Hanson's Uncommon Priors Require Origin Disputes; I find it quite fascinating, but won't try to summarize it here.
Which is to say that if two agents disagree about something observable and quantifiable...
True, this is an important limitation which I glossed over.
We can do slightly better by including any bet which all participants think they can resolve later. For example, we can bet on total vs. average utilitarianism if we think we can eventually agree on the answer (at which point we would resolve the bet). However, this obviously begs the question about Agreement, and so the bet carries a risk of never being resolved.
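A sketch of what such a deferred-resolution bet could look like (the class and its fields are hypothetical, purely to make the caveat concrete): the stake sits in escrow until the parties themselves agree, and if Agreement never comes, it sits there forever.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeferredBet:
    question: str
    stake: float
    position_a: str
    position_b: str
    agreed_answer: Optional[str] = None  # set only once both parties agree

    def settle(self) -> Optional[str]:
        """Return the winner, or None while the question is unresolved."""
        if self.agreed_answer is None:
            return None  # no agreement yet: the stake stays in escrow
        return "A" if self.agreed_answer == self.position_a else "B"

bet = DeferredBet("total vs. average utilitarianism", 100.0, "total", "average")
print(bet.settle())  # None -- and if agreement never comes, None forever
```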