Do we want to promote a theory that says “the very best thing is right, everything else is wrong”?
No. That just means the better your imagination gets, the less you do.
Consequentialism solves all of this:
Give each possible world a “goodness” or “awesomeness” or “rightness” number (utility).
Figure out the probability distribution over possible outcomes of each action you could take.
Choose the action that has highest mean awesomeness.
If something is impossible, it won’t be reachable from the action set and therefore won’t come into it. If something is bad, but nothing you can do will change it, it will cancel out. If some outcome is not actually preferable to some other outcome, you will have marked it as such in your utility assignment. If something good also comes with something worse, the utility of that possibility should reflect that. Etcetera.
In practice, you don’t actually compute this, because it is uncomputable. Instead you follow simple rules that get you good results, like “don’t throw away money” and “don’t kill people” and “feed yourself” (Notice how the rules are justified by appealing to their expected consequences, though).
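For a toy decision problem small enough to enumerate, the idealized procedure above can be sketched directly. The actions, probabilities, and utility numbers below are made-up illustrations, not anything from the original discussion:

```python
# Toy sketch of the expected-utility procedure: each action maps to a
# probability distribution over outcomes, each outcome has a utility,
# and we pick the action with the highest mean "awesomeness".
# All numbers here are invented for illustration.

# action -> list of (probability, utility_of_outcome)
actions = {
    "donate":     [(0.9, 10.0), (0.1, -2.0)],
    "do_nothing": [(1.0, 0.0)],
    "gamble":     [(0.5, 30.0), (0.5, -25.0)],
}

def expected_utility(outcomes):
    """Mean awesomeness: sum of probability-weighted utilities."""
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "donate": EU = 0.9*10 + 0.1*(-2) = 8.8, vs 0.0 and 2.5
```

Note how the impossible and the unchangeable simply never show up: outcomes you can't reach aren't in any action's distribution, and a constant background term added to every outcome's utility would shift all three expected utilities equally without changing which action wins.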
Thank you. As I understand it, “Consequentialism” means the idea that you should optimize outcomes.… It is a theory of right action. It requires a theory of “goodness” to go along with it. So, you’re saying that “awesomeness” or “utility” is what is to be measured or approximated. Is that utilitarianism?
No.
There are two different concepts that “utility” refers to. VNM utility is “that for which the calculus of expectation is legitimate”, i.e. it encodes your preferences, with no implication about what those preferences may be, except that they behave sensibly under uncertainty.
Utilitarian utility is an older (I think) concept referring to a particular assignment of utilities involving a sum of people’s individual utilities, possibly computed from happiness or something. I think utilitarianism is wrong, but that’s just me.
I was referring to VNM utility, so you are correct that we also need a theory of goodness to assign utilities. See my “morality is awesome” post for a half-baked but practically useful solution to that problem.
Got it. Much appreciated.
No problem. Glad to have someone curious asking questions and trying to learn!