I’ll make it more explicit with an example: here is a possible moral declaration: “give all your free time to charity”. Here is another: “you ought to provide your friend’s child with a university education if your friend cannot afford it, but you can (barely)”.
These seem very harsh. Let’s consider two scenarios: 1) you can do it, but it would leave you very unhappy and financially or mentally impoverished.
2) you cannot do it, because such demands, taken to their logical conclusion, result in awful outcomes for you.
If 1), then I suppose that cost should be considered in the calculation, and so my question is irrelevant to consequentialism.
If 2), then it seems like the best action is impossible. By “B” I meant the second-best action, say giving some time to charity, or donating some books to your friend’s child.
Do we want to promote a theory that says “the very best thing is right, everything else is wrong”, or “the best thing that ‘makes sense’ is still considered good, even if, were it possible, another action would be better”?
I realize that ‘makes sense’ carries a ton of baggage and is very vague. I’m having some difficulty articulating myself.
As for applicability, thanks, I will look at those.
Ah, I see. I’m pretty sure you’ve run up against the “ought implies can” issue, not the issue of demandingness. IIRC, this is a contested principle, but I don’t really know much about it other than that Kant originally endorsed it. I think the first part of Larks’ answer gives you a good idea of what consequentialists would say in response to this issue.
Do we want to promote a theory that says “the very best thing is right, everything else is wrong”,
No. That just means the better your imagination gets, the less you do.
Consequentialism solves all of this:
1. Give each possible world a “goodness” or “awesomeness” or “rightness” number (utility).
2. Figure out the probability distribution over possible outcomes of each action you could take.
3. Choose the action that has the highest mean awesomeness.
If something is impossible, it won’t be reachable from the action set and therefore won’t come into it. If something is bad, but nothing you can do will change it, it will cancel out. If some outcome is not actually preferable to some other outcome, you will have marked it as such in your utility assignment. If something good also comes with something worse, the utility of that possibility should reflect that. Etcetera.
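To make that concrete, here is a toy sketch of those three steps in Python. Every action, outcome, probability, and utility number below is invented purely for illustration; nothing here comes from the actual discussion.

```python
# Step 1: give each possible world a "goodness" number (utility).
utility = {
    "child_educated": 10.0,
    "some_books_donated": 3.0,
    "nothing_changes": 0.0,
    "you_bankrupt": -8.0,
}

# Step 2: a probability distribution over outcomes for each action
# actually available to you. Impossible actions never appear here.
outcomes_given_action = {
    "pay_full_tuition": {"child_educated": 0.9, "you_bankrupt": 0.1},
    "donate_books": {"some_books_donated": 1.0},
    "do_nothing": {"nothing_changes": 1.0},
}

def expected_utility(action):
    """Mean 'awesomeness' of an action under its outcome distribution."""
    return sum(p * utility[outcome]
               for outcome, p in outcomes_given_action[action].items())

# Step 3: choose the action with the highest mean awesomeness.
best = max(outcomes_given_action, key=expected_utility)
print(best, expected_utility(best))  # pay_full_tuition 8.2
```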
In practice, you don’t actually compute this, because it is uncomputable. Instead you follow simple rules that get you good results, like “don’t throw away money” and “don’t kill people” and “feed yourself”. (Notice how the rules are justified by appealing to their expected consequences, though.)
Thank you. As I understand it, “Consequentialism” means the idea that you should optimize outcomes… It is a theory of right action. It requires a theory of “goodness” to go along with it. So, you’re saying that “awesomeness” or “utility” is what is to be measured or approximated. Is that utilitarianism?
So, you’re saying that “awesomeness” or “utility” is what is to be measured or approximated. Is that utilitarianism?
No.
There are two different concepts that “utility” refers to. VNM (von Neumann–Morgenstern) utility is “that for which the calculus of expectation is legitimate”, i.e. it encodes your preferences, with no implication about what those preferences may be, except that they behave sensibly under uncertainty.
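“The calculus of expectation is legitimate” just means you can rank uncertain prospects by expected utility. A minimal sketch, where the outcomes and numbers are pure assumptions for illustration:

```python
# Hypothetical VNM utilities; the outcomes and numbers are invented.
u = {"best": 1.0, "middling": 0.6, "worst": 0.0}

def expected_u(lottery):
    """A lottery is a probability distribution over outcomes."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

gamble = {"best": 0.5, "worst": 0.5}   # 50/50 coin flip
sure_thing = {"middling": 1.0}         # middling outcome for certain

# Preference between uncertain prospects = comparison of expected
# utilities: here the sure thing (0.6) beats the gamble (0.5).
assert expected_u(sure_thing) > expected_u(gamble)
```

Nothing about these numbers is forced: an agent who assigned the middling outcome a utility of 0.4 would prefer the gamble, and would be just as rational by VNM lights.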
Utilitarian utility is an older (I think) concept referring to a particular assignment of utilities involving a sum of people’s individual utilities, possibly computed from happiness or something. I think utilitarianism is wrong, but that’s just me.
I was referring to VNM utility, so you are correct that we also need a theory of goodness to assign utilities. See my “morality is awesome” post for a half-baked but practically useful solution to that problem.
Got it. Much appreciated.
No problem. Glad to have someone curious asking questions and trying to learn!
Consequentialism is a method for choosing an action from the set of possible actions. If “the best action is impossible” it shouldn’t have been in the option set in the first place.
However, I think you might like to look into scalar consequentialism:
Plain Scalar Consequentialism: Of any two things a person might do at any given moment, one is better than another to the extent that its overall consequences are better than the other’s overall consequences.
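A hypothetical sketch of the difference this makes: instead of a binary right/wrong verdict, every available action gets a place on a scale, so the second-best option is merely less good rather than wrong. The actions and scores below are invented for illustration.

```python
# Invented actions and consequence scores, purely illustrative.
actions = {"give_all_free_time": 9.0, "give_some_time": 6.0, "do_nothing": 0.0}

# Maximizing consequentialism: only the single top action is "right".
best = max(actions, key=actions.get)

# Scalar consequentialism: actions are ranked by how good their
# consequences are; there is no right/wrong cutoff.
ranked = sorted(actions, key=actions.get, reverse=True)

print("maximizer's verdict:", best)
print("scalar ranking:", ranked)
```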
Thank you!