I haven’t studied all the discussions on the parliamentary model, but I’m finding it hard to understand what the implications are, and hard to judge how close to right it is. Maybe it would be enlightening if some of you who do understand the model took a shot at answering (or roughly approximating the answers to) some practice problems? I’m sure some of these are underspecified and anyone who wants to answer them should feel free to fill in details. Also, if it matters, feel free to answer as if I asked about mixed motivations rather than moral uncertainty:
I assign 50% probability to egoism and 50% to utilitarianism, and have been splitting my resources about evenly between the two. Suddenly and completely unexpectedly, Omega shows up and cuts my ability to affect my own happiness by a factor of one hundred trillion. Do I keep splitting my resources about evenly between egoism and utilitarianism?
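To make my confusion concrete, here is a toy sketch contrasting two allocation rules: straight expected-value maximization on an assumed common scale with linear returns, versus splitting in proportion to credence. I don’t know which of these (if either) the parliamentary model is supposed to endorse; the common scale and the linear returns are my own assumptions.

```python
# Toy contrast of two allocation rules (my framing, not the model's verdict).
# Big assumption: the theories' values sit on a common scale with linear
# returns, so each unit of resources buys `eff` units of what a theory wants.
credence = {"egoism": 0.5, "utilitarianism": 0.5}
eff_after = {"egoism": 1e-14, "utilitarianism": 1.0}  # after Omega's intervention

def ev_allocation(eff):
    """Expected-value maximization with linear returns: all resources go to the
    theory with the highest credence-weighted effectiveness (ties broken arbitrarily)."""
    best = max(eff, key=lambda t: credence[t] * eff[t])
    return {t: float(t == best) for t in eff}

def proportional_allocation(eff):
    """Split in proportion to credence, ignoring effectiveness entirely."""
    return dict(credence)

print(ev_allocation(eff_after))            # essentially everything to utilitarianism
print(proportional_allocation(eff_after))  # still 50/50
```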
I’m a Benthamite utilitarian, but I’m uncertain about the relative values of pleasure (measured in hedons, with a hedon calibrated as, e.g., me eating a bowl of ice cream) and pain (measured in dolors, with a dolor calibrated as, e.g., me slapping myself in the face). My probability distribution over the base-10 logarithm of the number of hedons that are equivalent to one dolor is normal with mean 2 and standard deviation 2. Someone offers me the chance to undergo one dolor in exchange for N hedons. For what N should I say yes?
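For calibration, here is what a straight expected-value calculation seems to say (I’m not claiming this is the parliamentary answer): since 10^X is lognormal when X is normal, the expected exchange rate sits far above the median of 100 hedons per dolor.

```python
import math

# Straight expected-value baseline (not necessarily the parliamentary answer).
# X = log10(hedons equivalent to one dolor), with X ~ Normal(mu=2, sigma=2).
mu, sigma = 2.0, 2.0
LN10 = math.log(10)

median_rate = 10 ** mu  # 100 hedons per dolor
# 10**X = exp(X * ln 10) is lognormal; its mean is exp(mu*ln10 + (sigma*ln10)**2 / 2).
mean_rate = math.exp(mu * LN10 + (sigma * LN10) ** 2 / 2)

print(f"median exchange rate: {median_rate:.0f} hedons per dolor")
print(f"mean exchange rate:   {mean_rate:.2e} hedons per dolor")
# A naive expected-hedon maximizer accepts only if N exceeds the mean rate
# (roughly 4e6 hedons), vastly more than the median of 100.
```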
I have a marshmallow in front of me. I’m 99% confident in a set of moral theories that all say I shouldn’t eat it because of future negative consequences. However, a voice is telling me that the only thing that matters in all the history of the universe is that I eat this exact marshmallow within the next minute, and I assign 1% probability to its being right. What do I do?
I’m 80% sure that I should be a utilitarian, 15% sure that I should be an egoist, and 5% sure that all that matters is that egoism plays no part in my decision. I’m given a chance to save 100 lives at the price of my own. What do I do?
I’m 100% sure that the only thing that intrinsically matters is whether a light bulb is on or off, but I’m 60% sure that it should be on and 40% sure that it should be off. I’m given an infinite sequence of opportunities to flip the switch (and no opportunity to improve my estimates). What do I do?
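For comparison, a straight expected-value treatment just leaves the bulb on every time, and by my own 60/40 credence that beats a policy of randomizing in proportion to credence on each round (one naive reading of giving each view its proportional share). I’d like to know whether the parliamentary model agrees.

```python
# Expected per-round probability (by my own lights) that the bulb is in the
# state that actually matters, for a policy that sets it to "on" with
# probability p on each opportunity.
P_ON_CREDENCE = 0.6  # my credence that "on" is the state that matters

def expected_correct(p: float) -> float:
    return P_ON_CREDENCE * p + (1 - P_ON_CREDENCE) * (1 - p)

print(expected_correct(1.0))  # always on:                 0.60
print(expected_correct(0.6))  # randomize 60/40 per round: 0.52
print(expected_correct(0.0))  # always off:                0.40
```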
There are 1000 people in the universe. I think my life is worth M of theirs, with the base-10 logarithm of M uniformly distributed from −3 to 3. I will be given the opportunity to save either my own life or 30 other people’s lives, but first I must choose between saving 3 people’s lives and learning the exact value of M with certainty. What do I do?
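Again for calibration only: a straight expected-value treatment, measuring everything in units of other people’s lives (which may already beg the question), seems to say the information is worth far more than the 3 lives. I don’t know whether the parliamentary model would say the same.

```python
import math

# Naive expected-value treatment (not the parliamentary answer), with
# everything measured in units of other people's lives.
# log10(M) ~ Uniform(-3, 3), where M is my life's worth in other lives.
LO, HI = -3.0, 3.0
LN10 = math.log(10)

def partial_mean(a: float, b: float) -> float:
    """E[10^X * 1{a <= X <= b}] for X ~ Uniform(LO, HI)."""
    return (10 ** b - 10 ** a) / ((HI - LO) * LN10)

e_M = partial_mean(LO, HI)  # expected worth of my life, ~72.4 other lives

# Later choice: save myself (worth M) or save 30 others.
threshold = math.log10(30)

# Without learning M: commit to the option with higher expected value.
value_without_info = max(e_M, 30.0)  # ~72.4, i.e. save myself

# After learning M: pick max(M, 30) case by case.
p_M_below_30 = (threshold - LO) / (HI - LO)
value_with_info = 30.0 * p_M_below_30 + partial_mean(threshold, HI)  # ~92.6

value_of_information = value_with_info - value_without_info  # ~20.2 > 3
print(e_M, value_without_info, value_with_info, value_of_information)
# Naive EV verdict: learn M rather than save the 3 people now.
```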