At the end of the day, I am left with the decision to either abandon unbounded utility maximization or indulge in the craziness of infinite ethics.
How about, for example, assigning .5 probability to a bounded utility function (U1), and .5 probability to an unbounded (or practically unbounded) utility function (U2)? You might object that taking the average of U1 and U2 still gives an unbounded utility function, but I think the right way to handle this kind of value uncertainty is by using a method like the one proposed by Bostrom and Ord, in which case you ought to end up spending roughly half of your time/resources on what U1 says you should do, and half on what U2 says you should do.
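For concreteness, here is a minimal Python sketch of that kind of split. It assumes, purely as a simplification and not as anything Bostrom and Ord actually specify, that for largely independent projects the parliamentary outcome reduces to dividing a resource budget in proportion to your credence in each utility function; the function name and the numbers are only illustrative.

```python
def split_resources(budget, credences):
    """Divide a resource budget in proportion to the credence assigned to
    each utility function (a credence-weighted split, not a full simulation
    of the parliamentary model)."""
    total = sum(credences.values())
    return {name: budget * p / total for name, p in credences.items()}

# 50% credence in a bounded U1, 50% in a (practically) unbounded U2:
allocation = split_resources(budget=100.0, credences={"U1": 0.5, "U2": 0.5})
print(allocation)  # {'U1': 50.0, 'U2': 50.0}
```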
I haven’t studied all the discussions on the parliamentary model, but I’m finding it hard to understand what the implications are, and hard to judge how close to right it is. Maybe it would be enlightening if some of you who do understand the model took a shot at answering (or roughly approximating the answers to) some practice problems? I’m sure some of these are underspecified and anyone who wants to answer them should feel free to fill in details. Also, if it matters, feel free to answer as if I asked about mixed motivations rather than moral uncertainty:
I assign 50% probability to egoism and 50% to utilitarianism, and am going along splitting my resources about evenly between those two. Suddenly and completely unexpectedly, Omega shows up and cuts down my ability to affect my own happiness by a factor of one hundred trillion. Do I keep going along splitting my resources about evenly between egoism and utilitarianism?
I’m a Benthamite utilitarian but uncertain about the relative values of pleasure (measured in hedons, with a hedon calibrated as, e.g., me eating a bowl of ice cream) and pain (measured in dolors, with a dolor calibrated as, e.g., me slapping myself in the face). My probability distribution over the base-10 log of the number of hedons that are equivalent to one dolor is normal with mean 2 and s.d. 2. Someone offers me the chance to undergo one dolor in exchange for N hedons. For what N should I say yes? (A naive expected-value benchmark for this one is sketched after the list of problems.)
I have a marshmallow in front of me. I’m 99% sure of a set of moral theories that all say I shouldn’t be eating it because of future negative consequences. However, I have this voice telling me that the only thing that matters in all the history of the universe is that I eat this exact marshmallow in the next exact minute and I assign 1% probability to it being right. What do I do?
I’m 80% sure that I should be utilitarian, 15% sure that I should be egoist, and 5% sure that all that matters is that egoism plays no part in my decision. I’m given a chance to save 100 lives at the price of my own. What do I do?
I’m 100% sure that the only thing that intrinsically matters is whether a light bulb is on or off, but I’m 60% sure that it should be on and 40% sure that it should be off. I’m given an infinite sequence of opportunities to flip the switch (and no opportunity to improve my estimates). What do I do?
There are 1000 people in the universe. I think my life is worth M of theirs, with the base-10 log of M uniformly distributed from −3 to 3. I will be given the opportunity to either save my own life or 30 other people’s lives, but first I will be given the opportunity to either save 3 people’s lives or learn the exact value of M with certainty. What do I do?
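On the hedon/dolor problem above, here is a rough benchmark under the naive reading where you simply maximize expected utility in one fixed unit, using the lognormal mean formula E[10^X] = 10^mu · exp((sigma·ln 10)²/2). The striking part is that the threshold depends enormously on whether you fix hedons or dolors as the unit of account, which is presumably part of what the question is probing; this is only that naive calculation, not an answer from the parliamentary model.

```python
import math

mu, sigma = 2.0, 2.0   # log10(hedons per dolor) ~ Normal(mean 2, s.d. 2)
ln10 = math.log(10)

# Expected cost of one dolor, measured in hedons: E[10^X]
cost_in_hedons = math.exp(mu * ln10 + (sigma * ln10) ** 2 / 2)

# Expected value of one hedon, measured in dolors: E[10^-X]
value_in_dolors = math.exp(-mu * ln10 + (sigma * ln10) ** 2 / 2)

print(f"say yes if N > {cost_in_hedons:.3g}  (maximizing expected hedons)")
print(f"say yes if N > {1 / value_in_dolors:.3g}  (maximizing expected dolors avoided)")
# roughly 4.0e6 in the first unit, but only about 0.0025 in the second
```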
Why spend only half on U1? Spend (1 − epsilon) on it, and write a lottery ticket giving the U2-oriented decision maker the power with probability epsilon. Since epsilon × infinity = infinity, you still get infinite expected utility (according to U2), and you also get pretty close to the maximum possible according to U1.
Infinity has uses even beyond allocating hotel rooms. (HT to A. Hajek)
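Spelling out the arithmetic of that lottery-ticket plan, with U1 bounded and the U2-preferred act handed over only with probability epsilon:

$$\mathbb{E}[U_2] = \epsilon \cdot \infty + (1-\epsilon)\cdot(\text{finite}) = \infty, \qquad \mathbb{E}[U_1] \ge (1-\epsilon)\,U_1^{*} + \epsilon \inf U_1 = U_1^{*} - \epsilon\,(U_1^{*} - \inf U_1),$$

where $U_1^{*}$ is the best value attainable by U1’s lights; since U1 is bounded, the loss term vanishes as epsilon goes to 0.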
Of course, Hajek’s reasoning also makes it difficult to locate exactly what it is that U2 “says you should do”.
In general, it should be impossible to allocate 0 to U2 in this sense. What’s the probability that an angel comes down and magically forces you to do the U2 decision? Around epsilon, I’d say.
U2 then becomes totally meaningless, and we are back with a bounded utility function.
“you ought to end up spending roughly half of your time/resources on what U1 says you should do, and half on what U2 says you should do”
That can’t be right. What if U1 says you ought to buy an Xbox, and then U2 says you ought to throw it away? That looks like a waste of resources. To avoid such waste, your behavior must be Bayesian-rational, which means it must be governed by some utility function U3. Which U3 does the parliamentary model define? You say it’s not averaging, but it has to be some function defined in terms of U1 and U2.
We’ve discussed a similar problem proposed by Stuart on the mailing list, and I believe I gave a good argument (on Jan 21, 2011) that U3 must be some linear combination of U1 and U2 if you want nice things like Pareto optimality. All bargaining should be collapsed into the initial moment, which outputs the coefficients of the linear combination; those coefficients never change from that point on.
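As a toy illustration of what “one fixed linear combination from the initial moment on” means in practice (the weights and the Xbox payoffs below are invented for the example, not derived from any actual bargaining):

```python
# Once bargaining has fixed the weights (w1, w2), every later choice is made
# by the single function U3 = w1*U1 + w2*U2, so a wasteful sequence like
# "buy an Xbox, then throw it away" can never come out on top.

w1, w2 = 0.5, 0.5  # weights fixed once, at the initial bargaining step

# Hypothetical payoffs of three whole courses of action:
options = {
    "keep the money":        {"U1": 0.0, "U2": 0.0},
    "buy and keep the Xbox": {"U1": 1.0, "U2": -0.2},
    "buy, then throw away":  {"U1": -0.5, "U2": -0.5},  # pure waste
}

def u3(payoffs):
    return w1 * payoffs["U1"] + w2 * payoffs["U2"]

best = max(options, key=lambda name: u3(options[name]))
print(best)  # the wasteful buy-then-discard plan is dominated and never chosen
```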
Right, clearly what I said can’t be true for arbitrary U1 and U2, since there are obvious counterexamples. And I think you’re right that theoretically, bargaining just determines the coefficients of the linear combination of the two utility functions. But it seems hard to apply that theory in practice, whereas if U1 and U2 are largely independent and sublinear in resources, splitting resources between them equally (perhaps with some additional Pareto improvements to take care of any noticeable waste from pursuing two completely separate plans) seems like a fair solution that can be applied in practice.
(ETA side question: does your argument still work absent logical omniscience, for example if one learns additional logical facts after the initial bargaining? It seems like one might not necessarily want to stick with the original coefficients if they were negotiated based on an incomplete understanding of what outcomes are feasible, for example.)
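A quick numeric check of the equal-split suggestion above, under the stated assumptions that U1 and U2 are independent and sublinear in resources (square roots here, purely as an illustration) and that the credences, and hence the bargained weights, are 0.5 each:

```python
import math

budget = 100.0
w1, w2 = 0.5, 0.5  # equal weight on U1 and U2

def combined(x):
    """Weighted sum of two independent, sublinear utility functions, where
    x units of resources go to U1's projects and (budget - x) to U2's."""
    return w1 * math.sqrt(x) + w2 * math.sqrt(budget - x)

# Grid search over how much of the budget to give U1:
best_x = max((i * budget / 1000 for i in range(1001)), key=combined)
print(best_x)  # 50.0 -- the even split maximizes the combined function
```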
My thoughts:
You do always get a linear combination.
I can’t tell what that combination is, which is odd. The non-smoothness is problematic. You run right up against the constraints; I don’t remember how to deal with this. Can you?
If you have N units of resources which can be devoted to either task A or task B, the ratio of resources used will be the ratio of votes.
I think it depends on what kind of contract you sign. If I sign a contract that says “we decide according to this utility function”, you get something different than with a contract that says “we vote yes in these circumstances and no in those circumstances”. The second kind of contract you can renegotiate, and that can change the utility function.
ETA:
In the case where utility is linear in the set of decisions that go to each side, for any Pareto-optimal allocation that both parties prefer to the starting (random) allocation, you can construct a set of prices consistent with that allocation. So you’re reduced to bargaining, which I guess means Nash arbitration.
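Here is a small sketch of that reduction, assuming (for illustration only) a handful of yes/no decisions, utilities that are linear in which side wins each decision, and a disagreement point where each decision is settled by a fair coin. It simply enumerates all allocations and picks the one maximizing the Nash product, which is one standard reading of “Nash arbitration”; the payoff numbers are made up.

```python
from itertools import product

# Each decision has a value to side A if A wins it and a value to B if B wins it.
decisions = [  # (value_to_A_if_A_wins, value_to_B_if_B_wins)
    (4.0, 1.0),
    (1.0, 3.0),
    (2.0, 2.0),
    (3.0, 1.0),
]

def payoffs(assignment):
    """Total utility to A and to B when assignment[i] says who wins decision i."""
    a = sum(va for (va, _), who in zip(decisions, assignment) if who == "A")
    b = sum(vb for (_, vb), who in zip(decisions, assignment) if who == "B")
    return a, b

# Disagreement point: each decision goes to each side with probability 1/2.
d_a = sum(va for va, _ in decisions) / 2
d_b = sum(vb for _, vb in decisions) / 2

def nash_product(assignment):
    a, b = payoffs(assignment)
    if a < d_a or b < d_b:   # worse than the disagreement point for someone
        return float("-inf")
    return (a - d_a) * (b - d_b)

best = max(product("AB", repeat=len(decisions)), key=nash_product)
print(best, payoffs(best))   # each side ends up with the decisions it cares about relatively more
```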
I don’t know how to make decisions under logical uncertainty in general. But in our example I suppose you could try to phrase your uncertainty about logical facts you might learn in the future in Bayesian terms, and then factor it into the initial calculation.