Using Nash with maximin as the BATNA has some big advantages:
- It really motivates bargaining, as there are usually Pareto improvements that are obvious, and near-Pareto improvements beyond even that.
- It’s literally impossible to do worse for any given individual. If you’re worried about the experience of the most unlucky/powerless member, this ensures you won’t degrade it with your negotiation.
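For concreteness, a minimal sketch of the objective this describes, in my notation rather than anything from the thread: writing d_i for player i’s maximin value (the disagreement point), the bargain picks the feasible outcome maximizing the Nash product.

```latex
x^* = \arg\max_{x \in X} \; \prod_{i=1}^{n} \bigl( u_i(x) - d_i \bigr)
```

Any outcome with u_i(x) > d_i for every i makes all the factors positive, which is why Pareto improvements over the maximin baseline are both easy to find and easy to agree on.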
I’m trying to compare your proposal to https://en.wikipedia.org/wiki/Shapley_value. On the surface, it seems similar: consider sub-coalitions to determine counterfactual contribution (it doesn’t matter what the contribution unit is; any linearly aggregatable quantity, whether utility or dollars, should work).
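For reference, a brute-force sketch of the standard Shapley computation (a textbook construction, nothing specific to the proposal): each player’s value is their marginal contribution v(S ∪ {p}) − v(S), averaged over every order in which the grand coalition could assemble.

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Brute-force Shapley values: average each player's marginal
    contribution v(S + {p}) - v(S) over every assembly order."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: t / math.factorial(len(players)) for p, t in totals.items()}

# Toy game: any coalition of two or more players produces one unit of value.
v = lambda s: 1.0 if len(s) >= 2 else 0.0
print(shapley_values(["alice", "bob", "carol"], v))  # each gets ~0.333
```

Note that v is evaluated on coalitions from which a player is simply absent; those are exactly the “calculation where they don’t exist” counterfactuals worried about in the next paragraph.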
I do worry a bit that in both Shapley and your system, it is acceptable to disappear people: the calculation where they don’t exist seems problematic when applied to actual people. It has the nice property of ignoring “outliers” (really, negative-value lives), but that’s only a nice property in theory; it would be horrific if actually applied.
> It really motivates bargaining, as there are usually Pareto improvements that are obvious, and near-Pareto improvements beyond even that.
I couldn’t really parse this. What does it mean to “motivate bargaining” and why is it good?
> It’s literally impossible to do worse for any given individual. If you’re worried about the experience of the most unlucky/powerless member, this ensures you won’t degrade it with your negotiation.
In practice, it’s pretty hard for a person to survive on their own, so usually not existing is at least as good as the maximin (or at least it’s not that much worse). It can actually be way, way better than the maximin, since the maximin implies every other person doing their collective best to make things as bad as possible for this person.
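Spelling it out (again my notation, not the thread’s): the maximin value is what a player can guarantee themselves when everyone else jointly plays against them.

```latex
d_i = \max_{a_i} \; \min_{a_{-i}} \; u_i(a_i, a_{-i})
```

The inner min ranges over the joint action of all other players, which is why this baseline can sit far below anything that would actually happen, and below simply not existing.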
There is a huge difference: Shapley value assumes utility is transferable, and I don’t.
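To make the contrast concrete: transferable utility is what lets the Shapley value divide the grand coalition’s worth into per-player payments, as in its efficiency axiom.

```latex
\sum_{i \in N} \varphi_i(v) = v(N)
```

A Nash-style bargain over non-transferable utilities has no analogue of this split: it only selects an outcome, and no one can be compensated in another person’s utility units.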
> I do worry a bit that in both Shapley and your system, it is acceptable to disappear people: the calculation where they don’t exist seems problematic when applied to actual people. It has the nice property of ignoring “outliers” (really, negative-value lives), but that’s only a nice property in theory; it would be horrific if actually applied.
By “outliers” I don’t mean negative-value lives, I mean people who want everyone else to die and/or to suffer.
It is not especially acceptable in my system to disappear people: it is an outcome that is considered, but it only happens if enough people have a sufficiently strong preference for it. I do agree it might be better to come up with a system that somehow discounts “nosy” preferences, i.e. doesn’t put much weight on what Alice thinks Bob’s life should look like when it contradicts what Bob wants.
By “motivate bargaining”, I meant that humans aren’t rational utility maximizers, and the outcomes they will seek and accept differ depending on the framing of the question. If you tell them that the rational baseline is low (and prove it using a very small set of assumptions), they’re more likely to accept a wide range of outcomes that are better than that baseline, even if not as much better as what pure manipulation might have gotten them.
By negative-value lives, I meant negative to the aggregate you’re maximizing, not negative to themselves. Someone who gains by others’ suffering necessarily reduces the sum. The assumption that not existing is an acceptable outcome to those participants still feels problematic to me, but I do agree that eliminating unpleasant utility curves makes the problem tractable.