Suppose that I would tentatively choose to torture one person to save a googolplex people from dust specks, and that I would additionally choose torture to save only a googol people from a papercut each. Do I have circular preferences if I would be much, much more willing to save a googolplex people from dust specks by giving papercuts to a googol people than to save either group from specks or papercuts by torturing one person?
I can achieve exactly the same total utility by giving specks to a googolplex people, giving papercuts to a googol people, or torturing one person. If I had to save 3^^^3 people from dust specks, I’d give papercuts to 3^^^3 × googol/googolplex people instead of torturing anyone, and I’d prefer even more to save the 3^^^3 people from dust specks by subjecting perhaps 2^^^2 people to a relatively troublesome dust speck. So why exactly do I prefer troublesome dust specks over papercuts over torture even when total utility is the same either way? I think I’m probably doing utilitarianism as more of a maximin calculation: maximizing the minimum individual utility in some way. I’m not willing to simply maximize total utility in cases where additional utility for some people must be bought at the cost of negative utility for others; increasing total utility requires something more like a fair exchange between individuals.
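To make the maximin point concrete, here is a toy calculation (every number in it is invented, and it only assumes the three options are tied on total harm, as stipulated above):

```python
# Toy comparison (all harm sizes and head-counts invented): three options
# deliberately tied on total harm, evaluated two ways.
options = {
    # name: (harm per affected person, number of people affected)
    "dust specks": (1, 10**6),
    "papercuts": (100, 10**4),
    "torture": (10**6, 1),
}

for name, (harm, count) in options.items():
    total = harm * count   # total-utility view: identical for all three
    worst_off = harm       # maximin view: harm to the worst-off person
    print(f"{name:11}  total={total:>9,}  worst-off={worst_off:>9,}")

# total is 1,000,000 in every row, so summing utilities cannot distinguish
# the options; minimizing the worst individual harm prefers specks, then
# papercuts, then torture.
```

A pure total-utility view is indifferent between the rows; ranking by the worst-off individual recovers the specks-then-papercuts-then-torture ordering.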
2^^^2 is only 4 (2^^^2 = 2^^2 = 2^2 = 4), so I’d choose that in a heartbeat; 2^^^3 is the kind of number you were probably thinking of. Though, if we’re choosing fair-sounding situations, I’d like to cut one of my fingernails too short to generate an MJ/K of negentropy.
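For what it’s worth, the small up-arrow values are easy to check with a quick recursion (arrow is just a throwaway name here, and it is only safe for tiny arguments):

```python
# Knuth's up-arrow recursion: arrow(a, n, b) is a followed by n up-arrows
# followed by b. It explodes extremely fast, so only tiny inputs are safe.
def arrow(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 3, 2))  # 2^^^2 = 4
print(arrow(2, 3, 3))  # 2^^^3 = 65,536
```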
I’ve got one way of thinking this problem through that seems to fit with what you’re saying – though of course, it has its own flaws: represent each person’s utility (if that’s the right word here) so that 0 is the maximum possible utility they can have, then map each individual’s utility with x ⟼ -e^(-x), so that a lot of harm to one person is weighted far more heavily than tiny harms to many people. This is almost certainly a case of forcing the model to say what we want it to say.
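A quick numerical sketch of what that transform does (the population size and harm sizes below are made up purely to show the shape of the effect):

```python
import math

def transformed_welfare(utilities):
    """Sum of -e^(-x) over individual utilities x, where x <= 0 and 0 is the
    best any individual can do."""
    return sum(-math.exp(-x) for x in utilities)

population = 1_000_000
many_small = [-20 / population] * population      # a tiny harm to everyone
one_large = [-20.0] + [0.0] * (population - 1)    # the same total harm, all on one person

print(sum(many_small), sum(one_large))    # untransformed totals: both -20
print(transformed_welfare(many_small))    # ~ -1,000,020
print(transformed_welfare(one_large))     # ~ -4.9e8: the e^20 term dominates
```

The convexity of e^(-x) is doing all the work: the worse off one person gets, the faster their term dominates the sum, so concentrated harm swamps widely spread harm even when the raw totals match.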