He would say, I think, that there is no way to draw boundary lines around ‘consequences’ that doesn’t place all moral weight on something like the intention of the action.
Well then the Kantian would pick B because he intends to not violate the CI with his actions? I’m not actually sure how this is different than valuing the consequences of your actions at all?
In your initial set up, you said that A and B differ in that A’s consequences violate the CI, while B’s consequences do not. I’m claiming that, for Kant, consequences aren’t evaluable in terms of the CI, and so we don’t yet have on the table a way for a Kantian to distinguish A and B. Consequences aren’t morally evaluable, Kant would say, in the very intuitive sense in which astronomical phenomena aren’t morally evaluable (granting that we sometimes assess astronomical phenomena as good or bad in a non-moral sense).
I once again find Kantianism immensely counter-intuitive and confusing, so at this point I must thank you for correcting my misconceptions and undoing my rationalizations. :)
I’ll try to present an argument toward Kant’s views in a clear way. The argument will consist of a couple of hopefully non-puzzling scenarios for moral evaluation, evaluations I expect to be at least intuitive to you (though perhaps not endorsed wholeheartedly), leading to the conclusion that you do not in fact concern yourself with consequences when making a moral evaluation. At some point, I expect, I’ll make a claim that you disagree with, and at that point we can discuss, if you like, where exactly the disagreement lies. So:
It’s common for consequentialists to evaluate actions in terms of expected rather than actual consequences: the philanthropist who donates to an efficient charity is generally not thought less morally good if some uncontrollable and unpredictable event prevents the good she expected to achieve. While we are ready to say that what happened in such a case was bad, we would not say that it was a moral bad, at least not on the philanthropist’s part.
If we grant this, then we have already admitted that the important factor in moral evaluations is not any actual event in the world, but rather something like expected consequences. In other words, moral evaluation deals with a mental event related to an action (i.e. the expectation of a certain consequence), not, or at least not directly, with the actual consequences of that action.
And Kant would go further to point out that it’s not quite just expected consequences either. We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor (expecting, but ignoring, the fact that this donation will also do some good for others) and one who donates out of a desire to do some good for others (say, expecting but ignoring the fact that this donation will also upset her neighbor). Both philanthropists expect the same consequences to play out, but we do not evaluate them equally.
So it is not expected (rather than actual) consequences that are the important factor in moral evaluations, because we can detect differences in our evaluations even when these are equal. Rather, Kant would go on to say, we evaluate actions on the basis of the reasons people have for bringing about the consequences they expect. (There are other options here, of course, and so the argument could go on).
If you’ve accepted every premise thus far, I think you’re pretty close to being in range of Kant’s argument for the CI. Has that helped?
I don’t accept this premise. A philanthropist whose actions lead to good consequences is morally better than a philanthropist whose actions lead to less-good consequences, wholly independent of their actual intention. This just seems like one of the fundamental aspects of consequentialism, to me.
Further, I would evaluate these two philanthropists exactly the same way, as long as the externalities of spiting neighbors don’t escalate to a level where they have substantial moral weight. Someone who saves a child because he is interested in seducing their mother and someone who saves a child out of pure altruism may not be equally moral, but if you only have this single instance with which to judge them, then they must be considered so.
So suppose two people, Abe and Ben, donated to an efficient charity. Abe intends to do some good for others. Ben intends this as the first but crucial stage of an elaborate plan to murder a rival. This plan is foiled, with the result that Ben’s money simply goes to the charity and does its work as normal. You would say that the actions of Abe and Ben are morally equal?
Assuming Ben’s plan was foiled for reasons beyond his control or expectation, would you then say that the deciding factor in determining the moral worth of Ben’s action was something beyond his control or expectation?
Yes, their particular acts of charity were morally equal, so long as their donations were equal.
The deciding factor in determining the moral worth of Ben’s actions was “out of his hands,” to a certain extent. He isn’t awarded points for trying.
Hm! Those are surprising answers. I drew my initial argument from Kant’s Groundwork, and so far as I can tell, Kant doesn’t expect his reader to give the answer you did. So I’m at a loss as to what he would say to you now. I’m no Kantian, but I have to say I find myself unable to judge Abe and Ben’s actions as you have.
From the way you had written the previous few comments, I had a feeling you weren’t expecting me to react as I did (and I have to note, you have been by far the more logically polite partner in this discussion so far).
This seems a common occurrence in the philosophy of that era. Hume is constantly asking rhetorical questions of his readers and assuming that they answer the same way he does...
If I had to guess, I would say that our disagreement boils down to a definitional one rather than one involving empirical facts, in a rather unsurprising manner.
Could you elaborate on this?
We aren’t in disagreement about any facts, but are simply using the term ‘moral judgement’ in different ways. I take moral judgement to be an after-the-fact calculation, and you take it to be a statement about intentionality and agency. You would, presumably, agree with me that Abe and Ben’s actions netted the same results, and I will agree with you that Abe’s motivations were “in better faith” than Ben’s, so we’ve essentially reached a resolution.
Well, I would say that Abe and Ben’s respective actions have different moral value, and you’ve said that they have the same moral value. I think we at least disagree about this, or do you think we’re using some relevant terms differently?
I think we disagree on the meaning of terms related to the word ‘moral’ and nothing further. We aren’t generating different expectations, and there’s no empirical test we could run to find out which one of us is correct.
Hm, I think you may be right. I cannot for the life of me think of an empirical test that would decide the issue.
Presumably, a consequentialist would assert that insofar as I evaluate a philanthropist who acts out of spite differently than a philanthropist who acts out of altruism even if (implausibly) I expect both philanthropists to cause the same consequences in the long run, I am not making a moral judgment in so doing, but some other kind of judgment, perhaps an aesthetic one.
The reason I would evaluate a philanthropist who acts out of spite differently from a philanthropist who acts out of altruism is precisely because I don’t expect both philanthropists to cause the same consequences in the long run.
Yes, I agree. That’s why I said “implausibly”. But the hypothetical hen proposed presumed this, and I chose not to fight it.
This seems like a judgement about the philanthropists, rather than the act of donating. My example was intended to discuss the act, not the agent.
Your wording suggests otherwise: “We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor...”
You’re right, that was careless of me. I intended the hypothetical only to be about the evaluations of their respective actions, not them as people. This is at least partly because Kantian deontology (as I understand it) doesn’t allow for any direct evaluations of people, only actions.
This wouldn’t be a convincing reply, I think, unless the consequentialist could come up with some reason for thinking such an evaluation is aesthetic other than ‘if it were a moral evaluation, it would conflict with consequentialism’. That is, assuming the consequentialist wants to appeal to common, actual moral evaluation in defending the plausibility of her view. She may not.
Convincing to whom? I mean, I agree completely that a virtue ethicist, for example, would not find it convincing. But neither is the assertion that it is a moral judgment convincing to a consequentialist.
If I’ve understood you, you expect even a consequentialist to say “Oh, you’re right, the judgment that a spiteful act of philanthropy is worse than an altruistic act of philanthropy whose expected consequences are the same is a moral judgment, and therefore moral judgments aren’t really about expected consequences.”
It’s not at all clear to me that a consequentialist who isn’t confused would actually say that.
Me? Hopefully, the consequentialist as well.
Imagine this conversation:
X: Behold A and B in their hypothetical shenanigans. That you will tend to judge the action of A morally better than that of B is evidence that you make moral evaluations in accordance with moral theory M (on which they are morally dissimilar) rather than moral theory N (according to which they are equivalent). This is evidence for the truth of M.
Y: I grant you that I judge A to be better than B, but this isn’t a moral judgement (and so not evidence for M). This is, rather, an aesthetic judgement.
X: What is your reason for thinking this judgement is aesthetic rather than moral?
Y: I am an Nist. If it were a moral judgement, it would be evidence for M.
X should not find this convincing. Neither should Y, or anyone else. Y’s argument is terrible.
We could fix Y’s argument by having him go back and deny that he judges A’s act to be morally different from B’s. This is what Berry did. Or Y could defend his claim, on independent grounds, that his judgement is aesthetic and not moral. Or Y could go back and deny that his actual moral evaluations being in accordance with M are evidence for M.
(shrug) At the risk of repeating myself: what Y would actually say, supposing Y were not a conveniently poor debater, is not “I am an Nist” but rather “Because what makes a judgment of an act a moral judgment is N, and the judgment of A to be better than B has nothing to do with N.”
X might disagree with Y about what makes a judgment a moral judgment—in fact, if X is not an Nist, it seems likely that X does disagree—but X simply insisting that “A is better than B” is a moral judgment because X says so is unconvincing.
There’s no going back involved. In this example Y has said all along that Y doesn’t judge A’s act to be morally different from B’s.
It seems to me that what you’re suggesting constitutes logical rudeness on the consequentialist’s part. The argument ran like this:
Take a hypothetical case involving A and B. You are asked to make a moral judgement. If you judge A and B’s actions differently, you are judging as if M is true. If you judge them to be the same, you are judging as if N is true.
The reply you provided wouldn’t be relevant if you had said right away that A and B’s actions are morally the same. It’s only relevant if you’ve judged them to be different (in some way) in response to the hypothetical. Your reply is then that this judgement turns out not to be a moral judgement at all, but an irrelevant aesthetic judgement. This is logically rude because I asked you to make a moral judgement in the first place. You should have just said right off that you don’t judge the two cases differently.
If someone asks me to make a moral judgment about whether A and B’s actions are morally the same, and I judge that they are morally different, and then later I say that they are morally equivalent, I’m clearly being inconsistent. Perhaps I’m being logically rude, perhaps I’m confused, perhaps I’ve changed my mind.
If someone asks me to compare A and B, and I judge that A is better than B, and then later I say that they are morally equivalent, another possibility is that I was not making what I consider a moral judgment in the first place.
I’m confused as to why, upon being asked for a moral evaluation in the course of a discussion on consequentialism and deontology, someone would offer me an aesthetic evaluation they themselves consider irrelevant to the moral question. I don’t think my request for an evaluation was very ambiguous: Berry understood and answered accordingly, and it would surely be strange to think I had asked for an aesthetic evaluation in the middle of a defense of deontology. So I don’t understand how your suggestion would add anything to the discussion.
In the hypothetical discussion you asked me to consider, X makes an assertion about Y’s moral judgments, and Y replies that what X is referring to isn’t a moral judgment. Hence, I said “In this example Y has said all along that Y doesn’t judge A’s act to be morally different from B’s,” and you replied “It seems to me that what you’re suggesting constitutes logical rudeness on the consequentialist’s part.”
I, apparently incorrectly, assumed we were still talking about your hypothetical example.
Now, it seems you’re talking instead about your earlier conversation with Berry, which I haven’t read. I’ll take your word for it that my suggestion would not add anything to that discussion.
Dave, I think you’re pulling my leg. Your initial comment to me was from one of my posts to Berry, so of course you read it! I’m going to tap out.