Limiting it to economic/comparable values is convenient, but also very inaccurate for all known agents—utility is private and incomparable.
I think modeling utility functions as private information makes a lot of sense! One of the claims I’m making in this post is that utility valuations can be elicited and therefore compared.
My go-to example of an honest mechanism is a second-price auction, which we know we can implement from within the universe. The bids serve as a credible signal of valuation, and if everyone follows their incentives they'll bid honestly. The person who values the item the most is declared the winner, and economic surplus is maximized.
(Assuming some background facts, which aren’t always true in practice, like everyone having enough money to express their preferences through bids. I used tokens in this example so that “willingness to pay” and “ability to pay” can always line up.)
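As a toy illustration of that mechanism (the names and numbers here are hypothetical, not from the example above), a sealed-bid second-price auction can be sketched in a few lines. The point is that each bidder's dominant strategy is to bid their true valuation: the winner pays the second-highest bid, so shading your bid can only lose you the item, never lower the price you pay.

```python
def second_price_auction(bids):
    """bids: dict mapping bidder name -> bid amount.
    Returns (winner, price_paid) under second-price rules."""
    # Rank bidders from highest to lowest bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # Winner pays the second-highest bid (0 if they were the only bidder).
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Truthful bids credibly reveal private valuations:
valuations = {"Alice": 40, "Bob": 55, "Carol": 30}
winner, price = second_price_auction(valuations)
# Bob, who values the item most, wins and pays 40 (Alice's bid).
```

Since honest bidding is a dominant strategy here, the submitted bids double as elicited valuations, which is what lets us say the item went to whoever valued it most.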
We use the same technique when we talk about the gains from trade, which I think the Ultimatum game is intended to model. If a merchant values a shirt at $5, and I value it at $15, then there’s $10 of surplus to be split if we can agree on a price in that range.
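The surplus arithmetic in the shirt example is simple enough to spell out directly (the price of 9 below is just one hypothetical split in the agreeable range):

```python
seller_value = 5    # merchant's valuation of the shirt
buyer_value = 15    # my valuation
surplus = buyer_value - seller_value  # total gains from trade: 10

# Any price strictly between 5 and 15 splits that surplus.
price = 9
seller_gain = price - seller_value   # 4
buyer_gain = buyer_value - price     # 6
assert seller_gain + buyer_gain == surplus
```

Moving the price within that range changes how the $10 is divided but not its total, which is exactly the structure the Ultimatum game isolates.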
Bob values the tokens more than Alice does. We can tell because he can buy them from her at a price she’s willing to accept. Side payments let us interpersonally compare valuations.
As I understand it, economic surplus isn't a subjective quantity. It's a measure of how much people would be willing to pay to go from the status quo to some better outcome. That might start out as private information in people's heads, but there is an objective answer, and we can elicit the information needed to compute and maximize it.
a purely rational Alice should not expect/demand more than $1.00, which is the maximum she could get from the best possible (for her) split without side payments.
I don’t know of any results that suggest this should be true! My understanding of the classic analysis of the Ultimatum game is that if Bob makes a take-it-or-leave-it offer to Alice, where she would receive any tiny amount of money like $0.01, she should take it because $0.01 is better than $0.
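That classic subgame-perfect reasoning can be sketched as a toy model (this is an illustration of the standard analysis, not an endorsement of it): the responder compares each offer to the $0 she gets by rejecting, so any positive offer beats rejection, and a proposer who anticipates this offers the minimum.

```python
def responder_accepts(offer, outside_option=0.0):
    # Classical best response: accept iff the offer beats rejecting (which pays 0).
    return offer > outside_option

def proposer_offer(total, increment=0.01):
    # Backward induction: offer the smallest positive amount,
    # since the responder is predicted to accept it.
    return increment if responder_accepts(increment) else total

offer = proposer_offer(1_000_000_000.00)
# The classic analysis says Alice accepts 0.01, because 0.01 > 0.
```

The size of the pie never enters the calculation, which is part of why this conclusion feels so strange at a billion dollars.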
My current take is that CDT-style thinking has crippled huge parts of economics and decision theory. The agreement of both parties is needed for this $1,000,000,000 of surplus to exist; if either walks away, they both get nothing. The Ultimatum game is symmetric, and the gains should be split symmetrically.
If we actually found ourselves in this situation, would we actually accept $1 out of $1 billion? Is that how we’d program a computer to handle this situation on our behalf? Is that the sort of reputation we’d want to be known for?