Your usage of the words “subjective” and “objective” is confusing.
Utilitarianism doesn’t forbid each individual person (agent) from valuing different things (having a different utility function). As such, there is no single, universal rule that can be applied to all possible agents to maximize “morality” (total summed utility).
It is “objective” in the sense that if you know all the utility functions and try to achieve the maximum possible total utility, that is the best thing to do from an external standpoint. It is also “objective” in the sense that when your own utility is maximized, that is the best possible outcome you could have, regardless of what anyone might think about it.
However, it is also “subjective” in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don’t, but that’s a theoretical nitpick).
Utilitarianism alone doesn’t prescribe or justify any action that affects anyone else’s values. It can be abused that way, but that’s not what it’s there for, AFAIK.
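To make the “objective”/“subjective” distinction above concrete, here is a minimal toy sketch (the agents, outcomes, and numbers are made-up assumptions purely for illustration, not part of the original discussion). It contrasts the outcome that maximizes total utility with each agent’s own optimum, and one agent’s utility function includes a component that values the other’s utility:

```python
# Toy sketch: two agents with different utility functions.
# "Objective" sense: maximize the total over all known utility functions.
# "Subjective" sense: each agent's own optimum, which need not coincide with the total.

outcomes = ["A", "B", "C"]

def alice_utility(outcome):
    # Alice's entirely personal valuation of each outcome (arbitrary numbers).
    return {"A": 10, "B": 4, "C": 0}[outcome]

def bob_utility(outcome):
    # Bob's utility includes a small component that values Alice's utility.
    base = {"A": 0, "B": 5, "C": 8}[outcome]
    return base + 0.2 * alice_utility(outcome)

agents = [alice_utility, bob_utility]

# External / "objective" view: pick the outcome maximizing total utility.
best_total = max(outcomes, key=lambda o: sum(u(o) for u in agents))

# Each agent's "subjective" view: pick the outcome maximizing their own utility.
best_for_alice = max(outcomes, key=alice_utility)
best_for_bob = max(outcomes, key=bob_utility)

print(best_total, best_for_alice, best_for_bob)  # e.g. A, A, C here
```

Nothing in the sketch restricts what the individual utility functions may be; only the aggregation step is “utilitarian.”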
I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.
Yeah.
Where things start getting interesting is when not only are some values given variable weights within the function, but the functions themselves become part of the calculation, and utility functions become modular and partially recursive.
I’m currently convinced that there’s at least one (perhaps well-hidden) such recursive module of utility-for-utility-functions built into the human brain, and that clever hacking of this module might be very beneficial in the long run.
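As a very loose illustration of what “modular and partially recursive” could mean here, consider this toy sketch (entirely an assumption for illustration; it is not a model of the brain, and the module names and weights are invented): object-level modules are weighted inside the function, and a meta-module assigns utility to the collection of modules itself.

```python
# Toy sketch: a modular utility function with a "utility-for-utility-functions" term.

def comfort_module(state):
    return state.get("comfort", 0)

def curiosity_module(state):
    return state.get("novelty", 0)

def meta_module(candidate_modules):
    # A meta-preference over utility functions themselves: here it simply
    # rewards having more distinct modules (an arbitrary stand-in for real
    # meta-preferences about what one's values should look like).
    return len(set(candidate_modules))

def total_utility(state, modules, weights, meta_weight):
    object_level = sum(w * m(state) for m, w in zip(modules, weights))
    # The function's own composition feeds back into the evaluation.
    return object_level + meta_weight * meta_module(tuple(modules))

state = {"comfort": 3, "novelty": 7}
modules = [comfort_module, curiosity_module]
print(total_utility(state, modules, weights=[1.0, 0.5], meta_weight=2.0))
```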
Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?
No form of the official theory in the papers I read, at the very least.
Many applications or implementations of utilitarianism, or utilitarian(-like) systems, do, however, enforce rules saying that if one agent’s weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, then that is the right thing to do. The size of the margin, the specific numbers, and the uncertainty values will vary by system.
I’ve never seen a system that would enforce such rules without some kind of weighting function over the utilities to correct for limited information, uncertainty, and diminishing-returns-like problems.
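A minimal sketch of the kind of weighted-margin rule described here (the margin value, weights, and confidence discounting below are assumptions for illustration, not taken from any particular system):

```python
# Toy sketch: a sacrifice is judged "right" only if the weighted,
# confidence-discounted gains to the many exceed the weighted loss
# to the one by at least a multiplicative margin.

def sacrifice_is_right(loss, loser_weight, gains, gain_weights, confidences, margin=1.5):
    weighted_loss = loser_weight * loss
    weighted_gain = sum(w * g * c for g, w, c in zip(gains, gain_weights, confidences))
    return weighted_gain >= margin * weighted_loss

# One agent loses 10 utility; three others each gain 6, with varying confidence
# in those estimates. Prints True for these illustrative numbers.
print(sacrifice_is_right(
    loss=10, loser_weight=1.0,
    gains=[6, 6, 6], gain_weights=[1.0, 1.0, 1.0],
    confidences=[0.9, 0.8, 0.95]))
```

The weights and confidence terms play the correcting role mentioned above: they let the rule account for limited information and uncertainty rather than comparing raw utility numbers directly.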
It seems to me that these two paragraphs contradict each other. Do you think “he should” means something different from “it is right for him to do so”?
No, those two phrasings don’t differ in any major way in utilitarian systems.
It seems I was confused when trying to answer your question. Utilitarianism can be seen as an abstract system of rules to compute stuff.
Certain ways to apply those rules to compute stuff are also called utilitarianism, including the philosophy that the maximum total utility of a population should take precedence over the utility of any one individual.
If utilitarianism is simply the set of rules you use to compute which things are best for one single, purely selfish agent, then no, nothing concludes that the agent should sacrifice anything. If you adhere to the classical philosophy related to those rules, then yes, any human will conclude what I said in that second paragraph of the grandparent (or something similar). The latter (the philosophy) is historically what appeared first, and is also what’s presented on Wikipedia’s page on utilitarianism.
Isn’t that decision theory?