What are the basic assumptions of utilitarianism and how are they justified? I was talking about ethics with a friend, and after a bunch of questions like “Why is utilitarianism good?” and “Why is it good for people to be happy?” I pretty quickly started to sound like an idiot.
I like this (long) informal explanation, written by Yvain.
See also.
Here is one common set of assumptions (a formal sketch follows the list):
Each possible world-state has a value
The value of a world-state is determined by the amount of value for the individuals in it
The function that determines the value of a world-state is monotonic in its arguments (we often, but not always, require linearity as well)
The function that determines the value of a world-state does not depend on the order of its arguments (a world where you are happy and I am sad has the same value as one where I am happy and you are sad)
The rightness of actions is determined wholly by the value of their (expected) consequences.
and then either
An action is right iff no other action has better (expected) consequences
or
An action is right in proportion to the goodness of its consequences
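
To pin the list down, here is a minimal formal sketch in notation of my own choosing; the symbols $W$, $V$, $u_i$, $f$, and $A$ are illustrative labels, not from the answer above.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Let $W$ be the set of possible world-states and $u_i(w)$ the value of
world-state $w \in W$ for individual $i$ (out of $n$ individuals).

% Each world-state has a value, determined only by the values
% it has for the individuals in it:
\[ V(w) = f\bigl(u_1(w), \dots, u_n(w)\bigr) \]

% Monotonicity: if no one is worse off in $w'$ than in $w$,
% then $w'$ is at least as good as $w$:
\[ \forall i.\; u_i(w) \le u_i(w') \implies V(w) \le V(w') \]

% Order-independence (anonymity): $f$ is symmetric in its arguments,
% so for every permutation $\sigma$ of $\{1, \dots, n\}$:
\[ f(x_1, \dots, x_n) = f\bigl(x_{\sigma(1)}, \dots, x_{\sigma(n)}\bigr) \]

% The linear special case (classical total utilitarianism):
\[ V(w) = \sum_{i=1}^{n} u_i(w) \]

% Consequentialist decision rule, maximizing version: an action $a$
% from the feasible set $A$ is right iff no alternative does better
% in expectation:
\[ a \text{ is right} \iff \forall a' \in A.\; \mathbb{E}[V \mid a] \ge \mathbb{E}[V \mid a'] \]

\end{document}
```

The scalar variant in the last list item would replace this maximizing rule with a degree of rightness that increases with $\mathbb{E}[V \mid a]$ rather than an all-or-nothing criterion.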