It’s common for consequentialists to evaluate actions in terms of expected rather than actual consequences: the philanthropist who donates to an efficient charity is generally not thought less morally good if some uncontrollable and unpredictable event prevents the good she expected to achieve. While we are ready to say that what happened in such a case was bad, we would not say that it was a moral bad, at least not on the philanthropist’s part.
I don’t accept this premise. A philanthropist whose actions lead to good consequences is morally better than a philanthropist whose actions lead to less-good consequences, wholly independent of their actual intention. This just seems to me like one of the fundamental aspects of consequentialism.
And Kant would go further to point out that it’s not quite just expected consequences either. We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor (expecting, but ignoring, the fact that this donation will also do some good for others) and one who donates out of a desire to do some good for others (say, expecting but ignoring the fact that this donation will also upset her neighbor). Both philanthropists expect the same consequences to play out, but we do not evaluate them equally.
Further, I would evaluate these two philanthropists exactly the same way, as long as the externalities of spiting neighbors don’t escalate to a level where they have substantial moral weight. Someone who saves a child because he is interested in seducing their mother and someone who saves a child out of pure altruism may not be equally moral, but if you only have this single instance with which to judge them, then they must be considered so.
A philanthropist whose actions lead to good consequences is morally better than a philanthropist whose actions lead to less-good consequences, wholly independent of their actual intention.
So suppose two people, Abe and Ben, donated to an efficient charity. Abe intends to do some good for others. Ben intends this as the first but crucial stage of an elaborate plan to murder a rival. This plan is foiled, with the result that Ben’s money simply goes to the charity and does its work as normal. You would say that the actions of Abe and Ben are morally equal?
Assuming Ben’s plan was foiled for reasons beyond his control or expectation, would you then say that the deciding factor in determining the moral worth of Ben’s action was something beyond his control or expectation?
So suppose two people, Abe and Ben, donated to an efficient charity. Abe intends to do some good for others. Ben intends this as the first but crucial stage of an elaborate plan to murder a rival. This plan is foiled, with the result that Ben’s money simply goes to the charity and does its work as normal. You would say that the actions of Abe and Ben are morally equal?
Yes, their particular acts of charity were morally equal, so long as their donations were equal.
Assuming Ben’s plan was foiled for reasons beyond his control or expectation, would you then say that the deciding factor in determining the moral worth of Ben’s action was something beyond his control or expectation?
The deciding factor in determining the moral worth of Ben’s actions was “out of his hands,” to a certain extent. He isn’t awarded points for trying.
Yes, their particular acts of charity were morally equal, so long as their donations were equal....The deciding factor in determining the moral worth of Ben’s actions was “out of his hands,” to a certain extent.
Hm! Those are surprising answers. I drew my initial argument from Kant’s Groundwork, and so far as I can tell, Kant doesn’t expect his reader to give the answer you did. So I’m at a loss as to what he would say to you now. I’m no Kantian, but I have to say I find myself unable to judge Abe and Ben’s actions as you have.
From the way you had written the previous few comments, I had a feeling you weren’t expecting me to react as I did (and I have to note, you have been by far the more logically polite partner in this discussion so far).
I drew my initial argument from Kant’s Groundwork, and so far as I can tell, Kant doesn’t expect his reader to give the answer you did.
This seems a common occurrence in the philosophy of that era. Hume is constantly asking rhetorical questions of his readers and assuming that they answer the same way he does...
I’m no Kantian, but I have to say I find myself unable to judge Abe and Ben’s actions as you have.
If I had to guess, I would say that our disagreement boils down to a definitional one rather than one involving empirical facts, in a rather unsurprising manner.
If I had to guess, I would say that our disagreement boils down to a definitional one rather than one involving empirical facts, in a rather unsurprising manner.
Could you elaborate on this?
We aren’t in disagreement about any facts, but are simply using the term ‘moral judgement’ in different ways. I take moral judgement to be an after-the-fact calculation, and you take it to be a statement about intentionality and agency. You would, presumably, agree with me that Abe and Ben’s actions netted the same results, and I will agree with you that Abe’s motivations were “in better faith” than Ben’s, so we’ve essentially reached a resolution.
Well, I would say that Abe and Ben’s respective actions have different moral value, and you’ve said that they have the same moral value. I think we at least disagree about this, or do you think we’re using some relevant terms differently?
I think we at least disagree about this, or do you think we’re using some relevant terms differently?
I think we disagree on the meaning of terms related to the word ‘moral’ and nothing further. We aren’t generating different expectations, and there’s no empirical test we could run to find out which one of us is correct.
Hm, I think you may be right. I cannot for the life of me think of an empirical test that would decide the issue.