What if the utility monster had a highly parallel brain, comprising 10^40 separate processes, each of which was of humanlike intelligence, that collectively operated the monster body? As it consumes more people it is able to generate/sustain/speed up more internal threads (somewhat similar to what might happen if whole brain emulations or AIs are much less energy-intensive than humans).
Then equality considerations would also favor feeding humanity to the monster. Would you want to feed it in that case?
If not, the objection may simply not be about some impersonal feature like aggregate happiness or equality. An alternative place to look would be the idea that morality partly reflects efficient rules for social cooperation. Ex ante, a large majority of people can agree to social insurance because they or their loved ones might need it, or to reduce the threat of the disenfranchised poor. But unless one takes a strongly Rawlsian position (“I might have been a utility monster mental process”), endorsing the pro-utility-monster stance is predictably worse for everyone except the utility monster, so people are reluctant to endorse it.
Upvoted. Before talking about a “utility monster” we should try to imagine a realistic one, one where we could believe that its utility really does keep increasing without diminishing returns.
Because if I can’t imagine a real utility monster, then my intuitions about it are really intuitions about something that is not a utility monster, but claims to be one in order to get more resources. And obviously the correct solution in that case would be to oppose the claims of such a fake utility monster. Now our brains just need to invent some rationalization that does not include saying that the utility monster is fake.
What if the utility monster had a highly parallel brain, comprising 10^40 separate processes, each of which was of humanlike intelligence, that collectively operated the monster body?
If that was the case it would not be a utility monster. It would be a bunch of people piloting a giant robot that is capable of birthing more people. A utility monster is supposed to be one distinct individual.
Then equality considerations would also favor feeding humanity to the monster. Would you want to feed it in that case?
This is equivalent to asking me if I would kill someone in order to cause a new person to be born. No, I would not do that. This is probably partly due to one of those other possible values I discussed, such as consideration for prior existing people, or valuing a high average utility that counts dead people towards the average.
It may also be due to the large inequality in lifespan between the person who was devoured and the new people their death created.
An alternative place to look would be the idea that morality partly reflects efficient rules for social cooperation.
I don’t know. If I were given the choice by Omega to create one of two worlds, one with a cannibalistic utility monster and one with a utility monster that shared with others, and were assured that neither I, nor anyone I knew, nor anyone else on Earth would ever interact with these worlds again, I still think that I would be motivated by my moral convictions to choose the second world.
You could chalk that up to signalling or habit formation, but I don’t think that’s it either. If Omega told me that it would erase the proceedings from my brain, so that they could not contribute to forming good habits or allow me to signal, I’d still believe I had a moral duty to choose the second world.
If that was the case it would not be a utility monster. It would be a bunch of people piloting a giant robot that is capable of birthing more people. A utility monster is supposed to be one distinct individual.
Your ethical theory is in deep trouble if it depends on a notion of ‘distinct individual’ in any crucial way. It is easy to imagine scenarios where there is a continuous path from robot-piloting people to one giant hive mind. (Kaj wrote a whole paper about such stuff: Coalescing minds: Brain uploading-related group mind scenarios) Or we can split brain hemispheres and give both of them their own robotic bodies.
I imagine it is possible to develop some ethical theory that could handle creatures capable of merging and splitting. One possibility might be to count “utility functions” instead of individuals. This would, of course, result in weird questions, like whether two people’s preferences stop counting when they merge and then count again when they split. But at least it would stop someone from giving themselves a moral right to everything by making enough ems of themself.
Again, this idea probably has problems that need to be worked out. I very much doubt that I could figure out all the ethical implications in one response when Kaj wasn’t able to in a huge paper. But I don’t think it’s an insurmountable problem.
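To make the “count utility functions instead of individuals” idea a bit more concrete, here is a minimal toy sketch. Everything in it (the Agent class, the moral_weight function, the string keys standing in for utility functions) is hypothetical and invented purely for illustration, not a worked-out theory: welfare is aggregated per distinct utility function rather than per body, so spinning up a billion ems of yourself does not multiply your moral weight.

```python
# Toy sketch: aggregate welfare by distinct utility function rather than by body.
# All names here (Agent, moral_weight, the "uf-*" keys) are hypothetical
# illustrations, not anything from the thread or from a real library.

from dataclasses import dataclass


@dataclass(frozen=True)
class Agent:
    name: str
    # A stand-in for "this agent's utility function": a hashable canonical
    # description, so exact copies (ems) share the same key.
    utility_function: str
    welfare: float


def moral_weight(agents: list[Agent]) -> dict[str, float]:
    """Count each distinct utility function once, averaging over its copies.

    Making a billion ems of yourself then leaves your moral weight unchanged,
    because all copies collapse onto one utility-function key.
    """
    totals: dict[str, list[float]] = {}
    for agent in agents:
        totals.setdefault(agent.utility_function, []).append(agent.welfare)
    return {uf: sum(ws) / len(ws) for uf, ws in totals.items()}


if __name__ == "__main__":
    population = [
        Agent("Alice", "uf-alice", 10.0),
        # A thousand ems of Bob still register as a single utility function.
        *[Agent(f"Bob-em-{i}", "uf-bob", 8.0) for i in range(1000)],
    ]
    print(moral_weight(population))  # {'uf-alice': 10.0, 'uf-bob': 8.0}
```

Whether copies should be averaged, summed once, or handled in some other way is exactly the kind of weird question the merging/splitting cases raise, so this is only meant to show the counting idea, not settle it.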
Maybe the utilitarians around here could chip in for a Kickstarter for an indie B-movie with a title like this? It would make for good propaganda.