I’d appreciate the proposed algorithm being spelled out in more detail sometime. E.g.
You each accept unfair splits with diminishing probability as those offers seem more unfair, such that it is always lower EV to offer a more unfair division.
Lower EV for your opponent, presumably. So they are disincentivised from offering more unfair divisions to you. But does that mean that to implement this strategy you need to correctly guess your opponent’s utility function? If you are gullible and believe they have whatever utility function they say they have, can they exploit you by choosing a utility function that makes you still have a reasonably high probability of accepting a pretty unfair deal, and then proposing said unfair deal?
I believe it’s the algorithm from https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness. Basically, if you’re offered an unfair deal (and the other trader isn’t willing to renegotiate), you should accept the trade with a probability just low enough that the other trader does worse in expectation than if they offered a fair trade. For example, if you think that a fair deal would provide $10 to both players over not trading and the other trader offers a deal where they get $15 and you get $4, then you should accept with probability 2/3−ϵ, so that in expectation they get less than if they offered a fair trade.
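As a minimal sketch of that rule (my own illustration using the dollar payoffs above; the function name and `epsilon` value are assumptions, not from the linked post):

```python
def acceptance_probability(their_fair_gain, their_offered_gain, epsilon=0.01):
    """Accept an offer with just low enough probability that the offerer's
    expected gain falls below what a fair deal would have given them."""
    if their_offered_gain <= their_fair_gain:
        return 1.0  # fair (or generous) offers are always accepted
    return their_fair_gain / their_offered_gain - epsilon

# The example above: a fair deal gives each side $10 over not trading,
# but the other trader proposes $15 for them and $4 for you.
p = acceptance_probability(10, 15)  # 2/3 - epsilon
assert p * 15 < 10  # their expected gain is now worse than offering fair
```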
Any Pareto bargaining method is vulnerable to lying about utility functions, and so to have a chance at bargaining fairly, it’s necessary to have some idea of what your partner’s utility function is. I don’t think that using this method for dealing with unfair trades is especially vulnerable to deception, though possibly there’s some better way to deal with uncertainty over your partner’s utility function.
Thanks. I know it’s that algorithm, I just want a more detailed and comprehensive description of it, so I can look at the whole thing and understand the problems with it that remain.
“Any Pareto bargaining method is vulnerable...” Interesting, thanks! I take it there is a proof somewhere of this? Where can I read about this? What is a Pareto bargaining method?
I feel like arguably “My bargaining protocol works great except that it incentivises people to try to fool each other about what their utility function is” …. is sorta like saying “my utopian legal system solves all social problems except for the one where people in power are incentivised to cheat/defect/abuse their authority.”
Though maybe that’s OK if we are talking about AI bargaining and they have magic mind-reading supertechnology that lets them access each other’s original (before strategic modification) utility functions?
Thanks. I know it’s that algorithm, I just want a more detailed and comprehensive description of it, so I can look at the whole thing and understand the problems with it that remain.
It’s really a class of algorithms, depending on how your opponent bargains, such that if the fair bargain (by your standard of fairness) gives X utility to you and Y utility to your partner, then you refuse to accept any other solution which gives your partner at least Y utility in expectation. So if they give you a take-it-or-leave-it offer which gives you positive utility and them Y’ > Y utility, then you accept it with probability Y/Y’ − ϵ, such that their expected value from making that offer is just below Y. If they have a different standard of fairness which gives you X’ utility and them Y’ utility but also use Adabarian bargaining, then you should agree to a bargain which gives you X’ − ϵ utility and them Y − ϵ utility (this is always possible via randomizing over their bargaining solution, your bargaining solution, and not trading, so long as all the bargaining solutions give positive utility to everyone).
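The randomization in the last step can be made concrete by solving for the lottery weights (a sketch under my own notation; the closed-form 2×2 solve and the `epsilon` value are illustrative assumptions):

```python
def mixed_bargain(X, Y, Xp, Yp, epsilon=0.01):
    """Weights for a lottery over (your fair solution, their fair solution,
    no trade) giving you Xp - epsilon and them Y - epsilon in expectation.

    (X, Y):   (your, their) utility under your standard of fairness.
    (Xp, Yp): (your, their) utility under their standard.
    Assumes all four payoffs are positive and the two standards differ."""
    tx, ty = Xp - epsilon, Y - epsilon   # target expected payoffs
    det = X * Yp - Xp * Y                # determinant of the 2x2 system
    a = (tx * Yp - Xp * ty) / det        # probability of your solution
    b = (X * ty - tx * Y) / det          # probability of their solution
    c = 1.0 - a - b                      # probability of no trade
    return a, b, c
```

For example, with your fair point at (10, 10) and theirs at (4, 15), the weights come out to roughly (2/11, 6/11, 3/11), and neither party gets more in expectation than their own standard of fairness would grant them.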
“Any Pareto bargaining method is vulnerable...” Interesting, thanks! I take it there is a proof somewhere of this? Where can I read about this? What is a pareto bargaining method?
Sorry, that should actually be Pareto bargaining solution, which is just a solution which ends up on the Pareto frontier. In The Pareto World Liars Prospers is a good explainer, and https://www.jstor.org/stable/1914235 shows a general result that every bargaining solution which is invulnerable to strategic dishonesty is equivalent to a lottery over dictatorships (where one person gets to choose their ideal solution) and tuple methods (where the possible outcomes are restricted to a set of two).
I feel like arguably “My bargaining protocol works great except that it incentivises people to try to fool each other about what their utility function is” …. is sorta like saying “my utopian legal system solves all social problems except for the one where people in power are incentivised to cheat/defect/abuse their authority.”
I agree with this, but also it would be pretty great to have a legal system which would work if people in power didn’t abuse their authority; I don’t think any current legal system even has that. Designing methods robust to strategic manipulation is an important part of the problem, but not the only part, and I don’t think it’s unreasonable to focus on other parts, especially since there are a lot of scenarios where approximating your partner’s utility function is possible. In particular, if monetary value can be assigned to everything being bargained over, then approximating utility as money is usually reasonable.
What would you say are the remaining problems that need to be solved, if we assume everyone has a way to accurately estimate everyone else’s utility function? The main one that comes to mind for me is: there are many possible solutions/equilibria/policy-sets that get to Pareto-optimal outcomes, but they differ in how good they are for different players. So it’s not enough that players be aware of a solution, and it’s not even enough that there be one solution which stands out as extra salient, because players will be hoping to achieve a solution that is more favorable to them and might do various crazy things to try to achieve that. (This is a vague problem statement though, perhaps you can do better!)
The main one that comes to mind for me is: there are many possible solutions/equilibria/policy-sets that get to Pareto-optimal outcomes, but they differ in how good they are for different players. So it’s not enough that players be aware of a solution, and it’s not even enough that there be one solution which stands out as extra salient, because players will be hoping to achieve a solution that is more favorable to them and might do various crazy things to try to achieve that.
This seems like it’s solved by just not letting your opponent get more utility than they would under the bargaining system you think is fair, no matter what crazy things they do? If there is a bargaining solution which stands out, then agents which strategize over which solution they propose will choose the salient one, since they expect their partner to do the same. I might be misunderstanding what you’re getting at, though.
What would you say are the remaining problems that need to be solved, if we assume everyone has a way to accurately estimate everyone else’s utility function?
Finding something like a universally canonical bargaining solution would be great, as it would allow agents with knowledge of each other’s utility functions to achieve Pareto optimality. I think it’s not fully disentangleable from the question of incentivizing honesty, as I could imagine that there is some otherwise great bargaining solution that turns out to be unfixably vulnerable to dishonesty. Although I do have an intuition that probably most reasonable bargaining solutions thought up in good faith are similar enough to each other that agents using different ones wouldn’t end up too far from the Pareto frontier, and so I’m not sure how important it is.
I think my answer is probably figuring out how to deal with strategic successor agents and with counterfactuals. The successor agent problem is similar to the problem of lying about utility functions: if you’re dealing with a successor agent (or an agent which has modified its utility function), you need to bargain with it as though it had the utility function of its creator (or its original utility function), and figuring out how to deal with uncertainty over how an agent’s values have been modified by other agents or its past self seems important.
Bargaining solutions usually have the property that you can’t naively apply them to subgames, as different agents might value the subgames more or less, and an agent might be happy to accept a locally unfair deal for a better deal in another subgame. This is fine for sequential or simultaneous subgames, but some subgames only happen with some probability. Determining what would happen in counterfactual subgames is important for determining the fair solution in the occurring subgames, but verifying what would counterfactually happen is often quite difficult.
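A toy numeric illustration of the subgame point (the weights are invented for the example):

```python
# Two subgames, each splitting $10. Each agent's utility is money
# weighted by how much they care about that subgame.
A_WEIGHTS = {"g1": 1, "g2": 3}   # A cares three times as much about subgame 2
B_WEIGHTS = {"g1": 3, "g2": 1}   # B cares three times as much about subgame 1

def utilities(a_share_g1, a_share_g2):
    """Utilities (A, B) when A receives the given dollar share of each subgame."""
    u_a = A_WEIGHTS["g1"] * a_share_g1 + A_WEIGHTS["g2"] * a_share_g2
    u_b = B_WEIGHTS["g1"] * (10 - a_share_g1) + B_WEIGHTS["g2"] * (10 - a_share_g2)
    return u_a, u_b

locally_fair = utilities(5, 5)    # 50/50 in each subgame -> (20, 20)
cross_trade = utilities(0, 10)    # locally "unfair" in both -> (30, 30)
```

Accepting a locally unfair (0, 10) split in one subgame in exchange for (10, 0) in the other Pareto-dominates splitting each subgame fairly, which is why fairness has to be assessed over the whole game, counterfactual subgames included.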
In some sense, these are just subproblems of incentivizing honesty generally. I think the problem of incentivizing honesty is the overwhelming importance-weighted bulk of the remaining open problems in bargaining theory (relative to what I know), and it’s hard for me to think of an important problem that isn’t entangled with that in some way.
Huh. I don’t worry much about the problem of incentivizing honesty myself, because the cases I’m most worried about are cases where everyone can read everyone else’s minds (with some time lag). Do you think there’s basically no problem then, in those cases?
There’s still the problem of successor agents and self-modifying agents, where you need to set up incentives to create successor agents with the same utility functions and to not strategically self-modify, and I think a solution to that would probably also work as a solution to normal dishonesty.
I do expect that in a case where agents can also see each other’s histories, we can make bargaining go well with the bargaining theory we know (given that the agents try to bargain well; there are of course possible agents which don’t try to cooperate well).
In the cases I’m thinking about you don’t just read their minds now, you read their entire history, including predecessor agents. All is transparent. (Fictional but illustrative example: the French AGI and the Russian AGI are smart like Sherlock Holmes; they can deduce pretty much everything that happened in great detail leading up to and during the creation of each other. Also, they are still running on human hardware at human institutions, and thanks to constant leaking, plus the offense/defense balance favoring offense, they can see logs of what each other is and was thinking the entire time, including through various rounds of modification-to-successor-agent.)
I’d like to understand why, if you think your trade partner is willing and able to change their offer based on your algorithm, you don’t set your baseline HIGHER than “fair”. If you have the power to manipulate the offer by being known to reject some, you should use that to get an even better deal, right?
It’s a combination of evidential reasoning and norm-setting. If you’re playing the ultimatum game over $10 with a similarly-reasoning opponent, then deciding to only accept an (8, 2) split mostly won’t increase the chance they give in, it will increase the chance that they also only accept an (8, 2) split, and so you’ll end up with $2 in expectation. The point of an idea of fairness is that, at least so long as there’s common knowledge of no hidden information, both players should agree on the fair split. So if, while bargaining with a similarly-reasoning opponent, you decide to only accept fair offers, this increases the chance that your opponent only accepts fair offers, and so you should end up agreeing, modulo factors which cause disagreement on what is fair.
Similarly, fair bargaining is a good norm to set, as, once it is a norm, it allows people to trade on/close to the Pareto frontier while disincentivizing any attempts to extort unfair bargains.
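The incentive claim can be checked numerically (a toy sketch assuming the responder uses the probabilistic-rejection policy from upthread, with an assumed (5, 5) fair point over a $10 pot):

```python
def proposer_ev(demand, fair_share=5.0, epsilon=0.01):
    """Proposer's expected take from a $10 pot when the responder accepts
    unfair demands with probability fair_share/demand - epsilon."""
    if demand <= fair_share:
        return demand  # fair or generous demands are always accepted
    return demand * (fair_share / demand - epsilon)  # = fair_share - demand*epsilon

demands = [d / 10 for d in range(1, 100)]  # $0.10 ... $9.90
best_demand = max(demands, key=proposer_ev)
```

Every demand above the fair share strictly lowers the proposer’s expected value, so a proposer who strategizes against this policy offers the fair split.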
It’s a combination of evidential reasoning and norm-setting.
I see the norm-setting, which is exactly what I’m trying to point out. Norm-setting is outside the game, and won’t actually work with a lot of potential trading partners. I seem to be missing the evidential reasoning component, other than figuring out who has more power to “win” the race.
with a similarly-reasoning opponent
Again, this requirement weakens the argument greatly. It’s my primary objection—why do we believe that our correspondent is sufficiently similarly-reasoning for this to hold? If it’s set up long in advance that all humans can take or leave an (8, 2) split, then those humans who’ve precommitted to reject that offer just get nothing (as does the offerer, but who knows what motivated that ancient alien)?
Yes, if it is known that you believe whatever they tell you about their utility function before setting your fair price, then you will probably get bad information and make deals that are bad for you.
For any given credence distribution you might have for their true utility, there are corresponding rejection functions that both disincentivize their offering unfair deals and get you high utility.
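One simple such rejection function (my own sketch; the comment above only claims such functions exist and doesn’t specify one) makes the unfair offer unprofitable in your expectation over hypotheses about their true utility function:

```python
def robust_acceptance_probability(hypotheses, epsilon=0.01):
    """hypotheses: list of (credence, their_fair_gain, their_offered_gain),
    one entry per guess about what the fair deal and the offer are really
    worth to them. Returns an acceptance probability that makes the offer
    worse for them, in your expectation, than offering a fair deal."""
    exp_fair = sum(c * fair for c, fair, _ in hypotheses)
    exp_offered = sum(c * offered for c, _, offered in hypotheses)
    if exp_offered <= exp_fair:
        return 1.0  # in expectation the offer is already fair or better
    return exp_fair / exp_offered - epsilon
```

A more conservative variant takes the minimum of `fair/offered` over the hypotheses, which keeps the offer unprofitable under every hypothesis separately rather than only on average.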
Thanks, this was super helpful!

You’re welcome!