What if the utility monster had a highly parallel brain, comprising 10^40 separate processes, each of humanlike intelligence, that collectively operated the monster’s body?
If that were the case, it would not be a utility monster. It would be a bunch of people piloting a giant robot that is capable of birthing more people. A utility monster is supposed to be one distinct individual.
Then equality considerations would also favor feeding humanity to the monster. Would you want to feed it in that case?
This is equivalent to asking me if I would kill someone in order to cause a new person to be born. No, I would not do that. This is probably partly due to one of those other possible values I discussed, such as consideration for prior existing people, or valuing a high average utility that counts dead people towards the average.
It may also be due to the large inequality in lifespan between the person who was devoured and the new people their death created.
An alternative place to look would be the idea that morality partly reflects efficient rules for social cooperation.
I don’t know. If Omega gave me the choice to create one of two worlds, one with a cannibalistic utility monster and one with a utility monster that shared with others, and assured me that neither I, nor anyone I knew, nor anyone else on Earth would ever interact with those worlds again, I still think my moral convictions would motivate me to choose the second world.
You could chalk that up to signalling or habit formation, but I don’t think that’s it either. If Omega told me it would erase the proceedings from my memory, so the choice could neither reinforce good habits nor serve as a signal, I’d still believe I had a moral duty to choose the second world.
If that were the case, it would not be a utility monster. It would be a bunch of people piloting a giant robot that is capable of birthing more people. A utility monster is supposed to be one distinct individual.
Your ethical theory is in deep trouble if it depends on a notion of ‘distinct individual’ in any crucial way. It is easy to imagine scenarios where there is a continuous path from robot-piloting people to one giant hive mind. (Kaj wrote a whole paper about such stuff: “Coalescing Minds: Brain Uploading-Related Group Mind Scenarios.”) Or we can split brain hemispheres and give both of them their own robotic bodies.
I imagine it is possible to develop some ethical theory that could handle creatures capable of merging and splitting. One possibility might be to count “utility functions” instead of individuals. This would, of course, raise weird questions, like whether two people’s preferences stop counting when they merge and then count again when they split. But at least it would stop someone from giving themselves a moral right to everything by making enough ems of themself.
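To make the “count utility functions instead of individuals” idea concrete, here is a minimal sketch of one way such a rule could work. It assumes, purely for illustration, that identical copies share a single welfare entry and that we keep the best-satisfied instance of each function; the names and conventions are mine, not anything from Kaj’s paper:

```python
# Hypothetical sketch: aggregate welfare by distinct utility function
# rather than by individual, so N identical copies (ems) of one person
# count only once. The "keep the best-satisfied copy" rule is just one
# arbitrary convention among several possible ones.

def aggregate_welfare(population):
    """population: list of (utility_function_id, welfare) pairs."""
    by_function = {}
    for function_id, welfare in population:
        # Copies sharing a utility function collapse into one entry.
        if function_id not in by_function or welfare > by_function[function_id]:
            by_function[function_id] = welfare
    return sum(by_function.values())

# Ten thousand ems of the same person add no more weight than one:
world_a = [("alice", 0.9)] * 10_000 + [("bob", 0.5)]
world_b = [("alice", 0.9), ("bob", 0.5)]
assert aggregate_welfare(world_a) == aggregate_welfare(world_b)
```

Under that convention, copying yourself buys no extra moral weight; how the rule should treat merges and splits is exactly the kind of open question mentioned above.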
Again, this idea probably has problems that need to be worked out. I very much doubt that I could figure out all the ethical implications in one response when Kaj wasn’t able to in a huge paper. But I don’t think it’s an insurmountable problem.