No form of the official theory in the papers I read does, at the very least.
Many applications or implementations of utilitarianism or utilitarian(-like) systems do, however, enforce a rule along these lines: if one agent's weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, then accepting that loss is the right thing to do. The size of the margin and the specific numbers and uncertainty values vary by system.
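As a rough illustration of that kind of rule (a minimal sketch with made-up utility numbers, equal weights, and an arbitrary margin; none of this comes from any particular formal system):

```python
# Hypothetical sketch: weighted total utility with a required margin.
# All numbers, weights, and the margin are illustrative assumptions.

def total_weighted_utility(utilities, weights):
    """Sum of each agent's utility scaled by that agent's weight."""
    return sum(w * u for u, w in zip(utilities, weights))

def sacrifice_is_right(before, after, weights, margin=5.0):
    """True if the change improves weighted total utility by at least
    `margin`, even though some individual agent ends up worse off."""
    gain = total_weighted_utility(after, weights) - total_weighted_utility(before, weights)
    return gain >= margin

# One agent loses 10 utility; three others gain 6 each.
before  = [10.0, 0.0, 0.0, 0.0]
after   = [0.0, 6.0, 6.0, 6.0]
weights = [1.0, 1.0, 1.0, 1.0]
print(sacrifice_is_right(before, after, weights))  # True: net gain of 8 >= margin of 5
```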
It seems to me that these two paragraphs contradict each other. Do you think the “he should” means something different from “it is right for him to do so”?
No, there is no major difference between them in utilitarian systems.
It seems I was confused when trying to answer your question. Utilitarianism can be seen as an abstract system of rules for computing with utilities.
Certain ways of applying those rules are also called utilitarianism, including the philosophy that the maximum total utility of a population should take precedence over the utility of any one individual.
If utilitarianism is simply the set of rules you use to compute which things are best for a single, purely selfish agent, then no, nothing implies that the agent should sacrifice anything. If you adhere to the classical philosophy associated with those rules, then yes, any human will conclude something like what I said in that second paragraph in the grandparent. The latter (the philosophy) is what appeared first historically, and it is also what is presented on Wikipedia’s page on utilitarianism.
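To make the contrast concrete (a minimal sketch; the actions, agents, and utility numbers are invented for illustration, not taken from any source):

```python
# Hypothetical sketch contrasting the two readings discussed above.
# Agent 0 is "the agent himself"; the utilities are made up.

# Each action maps to the utility it gives each of three agents.
actions = {
    "keep":      [10.0, 0.0, 0.0],  # the agent keeps his resource
    "sacrifice": [0.0, 8.0, 8.0],   # he gives it up; two others benefit
}

# Reading 1: rules for a single purely selfish agent -- maximize his own utility.
selfish_choice = max(actions, key=lambda a: actions[a][0])

# Reading 2: the classical philosophy -- maximize total utility of the population.
classical_choice = max(actions, key=lambda a: sum(actions[a]))

print(selfish_choice)    # "keep"      (10 > 0 for agent 0)
print(classical_choice)  # "sacrifice" (16 > 10 in total)
```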
Isn’t that decision theory?