Right, clearly what I said can’t be true for arbitrary U1 and U2, since there are obvious counterexamples. And I think you’re right that theoretically, bargaining just determines the coefficients of the linear combination of the two utility functions. But it seems hard to apply that theory in practice, whereas if U1 and U2 are largely independent and sublinear in resources, splitting resources between them equally (perhaps with some additional Pareto improvements to take care of any noticeable waste from pursuing two completely separate plans) seems like a fair solution that can be applied in practice.
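To make the equal-split idea concrete, here is a toy sketch (the square-root utilities and N = 100 are my illustrative assumptions, not anything from the discussion): with two independent sublinear utilities, a 50/50 split lies on the Pareto frontier, and any reallocation away from it helps one side only by hurting the other.

```python
import math

# Hypothetical example: two independent utility functions, each
# sublinear (sqrt) in the resources devoted to its task, with N
# total units to divide between them.
def u1(r):
    return math.sqrt(r)

def u2(r):
    return math.sqrt(r)

N = 100.0

# Equal split: each side pursues its own plan with N/2 units.
equal = (u1(N / 2), u2(N / 2))

# Shifting resources away from the equal split always makes one
# side strictly better off and the other strictly worse off, so
# the equal split sits on the Pareto frontier; symmetry makes it
# the natural "fair" point among pure divisions.
for eps in (1.0, 5.0, 20.0):
    shifted = (u1(N / 2 + eps), u2(N / 2 - eps))
    assert shifted[0] > equal[0] and shifted[1] < equal[1]

print(equal)  # both sides get sqrt(50) ≈ 7.07
```

The sketch doesn't model the "additional Pareto improvements" mentioned above; it only shows why the equal split itself is not dominated.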
(ETA side question: does your argument still work absent logical omniscience, for example if one learns additional logical facts after the initial bargaining? It seems like one might not necessarily want to stick with the original coefficients if they were negotiated based on an incomplete understanding of what outcomes are feasible, for example.)
My thoughts:
You do always get a linear combination.
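In symbols (standard notation, mine): any Pareto-optimal joint policy π maximizes some weighted combination of the two utility functions,

```latex
\pi^{*} \in \arg\max_{\pi}\ \lambda\, U_1(\pi) + (1-\lambda)\, U_2(\pi),
\qquad \lambda \in [0, 1],
```

and the bargaining is what pins down λ.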
I can’t tell what that combination is, which is odd. The non-smoothness is problematic. You run right up against the constraints—I don’t remember how to deal with this. Can you?
If you have N units of resources which can be devoted to either task A or task B, the ratio of resources used will equal the ratio of votes.
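A one-line sketch of that rule (the 2:1 vote weights and N = 90 are made-up numbers for illustration):

```python
# Split n_units between tasks A and B in proportion to the vote
# weights w1 : w2 from the bargaining.
def split_by_votes(n_units, w1, w2):
    r_a = n_units * w1 / (w1 + w2)
    return r_a, n_units - r_a

a, b = split_by_votes(90, 2, 1)
print(a, b)  # 60.0 units to task A, 30.0 to task B
```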
I think it depends on what kind of contract you sign. If I sign a contract that says “we decide according to this utility function”, you get something different than a contract that says “we vote yes in these circumstances and no in those circumstances”. The second kind of contract you can renegotiate, and renegotiation can change the effective utility function.
ETA:
In the case where utility is linear in the set of decisions that go to each side, for any Pareto-optimal allocation that both parties prefer to the starting (random) allocation, you can construct a set of prices that is consistent with that allocation. So you’re reduced to bargaining, which I guess means Nash arbitration.
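To illustrate the reduction to Nash arbitration: the Nash bargaining solution picks the feasible point that maximizes the product of each side's gain over its disagreement payoff. A brute-force sketch with made-up numbers (a linear frontier and a zero disagreement point):

```python
# Nash bargaining solution by brute force: scan a list of
# Pareto-optimal payoff pairs for the one maximizing the product
# of gains over the disagreement payoffs (d1, d2).
def nash_solution(frontier, d1, d2):
    return max(
        frontier,
        key=lambda p: max(p[0] - d1, 0.0) * max(p[1] - d2, 0.0),
    )

# Illustrative linear frontier u1 + u2 = 10, disagreement point (0, 0).
frontier = [(x / 100, 10 - x / 100) for x in range(0, 1001)]
print(nash_solution(frontier, 0.0, 0.0))  # (5.0, 5.0)
```

On a symmetric frontier with a symmetric disagreement point, the product of gains is maximized at the even split, which is why the answer lands at (5.0, 5.0) here.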
I don’t know how to make decisions under logical uncertainty in general. But in our example I suppose you could try to phrase your uncertainty about logical facts you might learn in the future in Bayesian terms, and then factor it into the initial calculation.