Er, doesn’t that just mean human morality assigns low desirability to the outcome “innocent bystander killed for their organs”?
That’s why I put “I am unsure how you define utilitarianism”. If you evaluate only the outcome, you see f(1 dead) + f(5 alive). If you evaluate the whole process, you see f(1 person killed as an innocent bystander) + f(5 alive), which may have a much lower desirability because of the morality penalty attached to the killing itself.
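To make the distinction concrete, here is a toy sketch in Python of an outcome-only utility versus a process-sensitive one. It is purely illustrative: the function names and the numbers (LIFE_VALUE, MURDER_PENALTY) are my own assumptions, not anything from the OP or any actual moral theory.

```python
# Toy comparison of "evaluate only the outcome" vs. "evaluate the whole process".
# All names and numbers are illustrative assumptions, not a real moral model.

LIFE_VALUE = 1.0        # utility assigned to one person staying alive
MURDER_PENALTY = 10.0   # extra disutility when an innocent is deliberately killed

def outcome_only(deaths: int, survivors: int) -> float:
    """f(1 dead) + f(5 alive): only the end state counts."""
    return survivors * LIFE_VALUE - deaths * LIFE_VALUE

def process_sensitive(deaths: int, survivors: int, murders: int) -> float:
    """Same end state, but deliberate killings carry an extra morality penalty."""
    return (survivors * LIFE_VALUE
            - deaths * LIFE_VALUE
            - murders * MURDER_PENALTY)

# Transplant case: kill one bystander to save five patients.
print(outcome_only(deaths=1, survivors=5))                  # 4.0  -> looks "good"
print(process_sensitive(deaths=1, survivors=5, murders=1))  # -6.0 -> looks "bad"
```

The point is only that the sign of the answer can flip once the process term is included, not that these particular weights are the right ones.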
The same consideration applies to the OP: if you only evaluate the final outcome, you may conclude that killing hard-to-satisfy people is a good thing. However, once you add the morality penalty for killing innocent people, the equation changes.
The question of a one- vs. multi-dimensional objective remains: extreme liberal moralism would say it is not allowed to take even one dollar from a person, even if it could pay for saving a life, and that killing one innocent bystander is wrong even if it could save a billion lives, simply because agents are autonomous entities with inalienable rights to life, property and freedom that cannot be violated, even for the greater good.
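Here is the same idea sketched the other way: rights treated as hard side constraints rather than as one more term in a single sum. Again just a sketch; Action, aggregated_score, is_permissible and the exchange rate of 100 are hypothetical names and values I made up for illustration.

```python
# Toy illustration of rights as hard side constraints vs. terms in one utility sum.
# Entirely hypothetical names and values.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    lives_saved: int
    rights_violations: int  # e.g. killings or takings without consent

def aggregated_score(a: Action) -> float:
    """One-dimensional view: trade rights off against lives at some exchange rate."""
    return a.lives_saved - 100.0 * a.rights_violations

def is_permissible(a: Action) -> bool:
    """Constraint view: any rights violation rules the action out, whatever the payoff."""
    return a.rights_violations == 0

harvest = Action("kill one bystander to save a billion", 1_000_000_000, 1)

print(aggregated_score(harvest))  # huge positive number -> "do it"
print(is_permissible(harvest))    # False -> forbidden regardless of the payoff
```

Whether a moral framework should look like the first function, the second, or something in between is exactly the open question here.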
The above problems can only be resolved if the moral agents voluntarily opt into a system that takes away a portion of their individual freedom for a greater good. However, this system should not give arbitrary power to a single entity; every (otherwise immoral) violation of autonomy should happen for a well-defined “higher” purpose.
I am not saying this is the definitive way to address morality in the abstract in the presence of a superintelligent entity; these are just restatements of some of the moral principles our liberal Western democracies are built upon.