Good luck. Nearly everything currently written on morality is horribly wrong. I took a few ethics classes, and they are mostly junk. Maybe things are better in proper philosophy, but I doubt it.
I have a hunch things are a good bit better in proper philosophy than you think. Admittedly, most intro- and medium-level courses on ethics are pretty terrible (this is obviously only from personal experience). If I had to guess why that is, it’d probably be for the same reason I think most other philosophy courses could be better: too much focus on history.
In my intermediate-level course, we barely talk about history at all. It is supposed to focus on “developments” in the last thirty years or so. The problem I have is that most profs think philosophy can go about figuring out the truth without things like empiricism, scientific study, neuroscience, probability, and decision theory. Everything is very “intuitive”, and I find that difficult to grasp.
For example, when discussing deontology, I asked why there should be absolute “requirements” as an argument against consequentialism, seeing that if it’s true that the best consequences would result from taking these requirements into account, then that is what a consequentialist would (should) say as well! The professor’s answer, and that of many students, was: “That’s just the way it is. Some things ought not be done, simply because they ought not be done.” That is a hard pill for me to swallow. In this case I am much more comfortable with Eliezer’s Ethical Injunctions.
(The prof was not necessarily promoting deontology but was arguing on its behalf.)
Note that you could reverse this conversation: a deontologist could ask you why we should privilege the consequences so much, instead of just doing the right things regardless of the consequences. I would expect that your response would be pretty close to “that’s just the way it is, it’s the consequences that are the most important”—at least, I know that mine would be. And the deontologist would find this a very hard pill to swallow.
As Alicorn has pointed out, you can’t understand deontology by requiring an explanation on consequentialism’s terms, and you likewise can’t understand consequentialism by requiring an explanation on deontology’s terms. At some point, your moral reasoning has to bottom out in some set of moral intuitions which are just taken as axiomatic and cannot be justified.
Note that you could reverse this conversation: a deontologist could ask you why we should privilege the consequences so much, instead of just doing the right things regardless of the consequences. I would expect that your response would be pretty close to “that’s just the way it is, it’s the consequences that are the most important”—at least, I know that mine would be. And the deontologist would find this a very hard pill to swallow.
Except that deontology cares about consequences as well, so there’s no need to convince them that the consequences of our actions have moral weight. If act A’s consequences violate the Categorical Imperative, and act B’s consequences don’t, then the Kantian (for example) will pick act B.
The friction between deontology and consequentialism is that they disagree about what should be maximized, a distinction which is often simplified to consequentialists wanting to maximize the ‘Good’ and deontologists wanting to maximize the ‘Right’.
I’ll agree that past this point, much of the objections to the other side’s positions hit ‘moral bedrock’ and intuitions are often seen as the solution to this gap.
If act A’s consequences violate the Categorical Imperative, and act B’s consequences don’t, then the Kantian (for example) will pick act B.
For Kant (and for all the Kantians I know of), consequences aren’t evaluable in terms of the categorical imperative. This is something like a category mistake. Kant is pretty explicit that the consequences of an action well and truly do not matter to the moral value of an action. He would say, I think, that there is no way to draw boundary lines around ‘consequences’ that doesn’t place all moral weight on something like the intention of the action.
He would say, I think, that there is no way to draw boundary lines around ‘consequences’ that doesn’t place all moral weight on something like the intention of the action.
Well then the Kantian would pick B because he intends not to violate the CI with his actions? I’m not actually sure how this is different from valuing the consequences of your actions at all?
Well then the Kantian would pick B because he intends not to violate the CI with his actions?
In your initial set up, you said that A and B differ in that A’s consequences violate the CI, while B’s consequences do not. I’m claiming that, for Kant, consequences aren’t evaluable in terms of the CI, and so we don’t yet have on the table a way for a Kantian to distinguish A and B. Consequences aren’t morally evaluable, Kant would say, in the very intuitive sense in which astronomical phenomena aren’t morally evaluable (granting that we sometimes assess astronomical phenomena as good or bad in a non-moral sense).
I once again find Kantianism immensely counter-intuitive and confusing, so at this point I must thank you for correcting my misconceptions and undoing my rationalizations. :)
I’ll try to present an argument toward Kant’s views in a clear way. The argument will consist of a couple of hopefully non-puzzling scenarios for moral evaluation, an evaluation I expect at least to be intuitive to you (though perhaps not endorsed wholeheartedly), leading to the conclusion that you do not in fact concern yourself with consequences when making a moral evaluation. At some point, I expect, I’ll make a claim that you disagree with, and at that point we can discuss, if you like, where the disagreement lies exactly. So:
It’s common for consequentialists to evaluate actions in terms of expected rather than actual consequences: the philanthropist who donates to an efficient charity is generally not thought less morally good if some uncontrollable and unpredictable event prevents the good she expected to achieve. While we are ready to say that what happened in such a case was bad, we would not say that it was a moral bad, at least not on the philanthropist’s part.
If we grant this, then we have already admitted that the important factor in moral evaluation is not any actual event in the world, but rather something like expected consequences. In other words, moral evaluation deals with a mental event related to an action (i.e. the expectation of a certain consequence), not, or at least not directly, the consequence of that event.
And Kant would go further to point out that it’s not quite just expected consequences either. We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor (expecting, but ignoring, the fact that this donation will also do some good for others) and one who donates out of a desire to do some good for others (say, expecting but ignoring the fact that this donation will also upset her neighbor). Both philanthropists expect the same consequences to play out, but we do not evaluate them equally.
So it is not expected (rather than actual) consequences that are the important factor in moral evaluations, because we can detect differences in our evaluations even when these are equal. Rather, Kant would go on to say, we evaluate actions on the basis of the reasons people have for bringing about the consequences they expect. (There are other options here, of course, and so the argument could go on).
If you’ve accepted every premise thus far, I think you’re pretty close to being in range of Kant’s argument for the CI. Has that helped?
It’s common for consequentialists to evaluate actions in terms of expected rather than actual consequences: the philanthropist who donates to an efficient charity is generally not thought less morally good if some uncontrollable and unpredictable event prevents the good she expected to achieve. While we are ready to say that what happened in such a case was bad, we would not say that it was a moral bad, at least not on the philanthropist’s part.
I don’t accept this premise. A philanthropist whose actions lead to good consequences is morally better than a philanthropist whose actions lead to less-good consequences, wholly independent of their actual intention. This just seems like one of the fundamental aspects of consequentialism, to me.
And Kant would go further to point out that it’s not quite just expected consequences either. We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor (expecting, but ignoring, the fact that this donation will also do some good for others) and one who donates out of a desire to do some good for others (say, expecting but ignoring the fact that this donation will also upset her neighbor). Both philanthropists expect the same consequences to play out, but we do not evaluate them equally.
Further, I would evaluate these two philanthropists exactly the same way, as long as the externalities of spiting neighbors don’t escalate to a level where they have substantial moral weight. Someone who saves a child because he is interested in seducing their mother and someone who saves a child out of pure altruism may not be equally moral, but if you only have this single instance with which to judge them, then they must be considered so.
A philanthropist whose actions lead to good consequences is morally better than a philanthropist whose actions lead to less-good consequences, wholly independent of their actual intention.
So suppose two people, Abe and Ben, donated to an efficient charity. Abe intends to do some good for others. Ben intends this as the first but crucial stage of an elaborate plan to murder a rival. This plan is foiled, with the result that Ben’s money simply goes to the charity and does its work as normal. You would say that the actions of Abe and Ben are morally equal?
Assuming Ben’s plan was foiled for reasons beyond his control or expectation, would you then say that the deciding factor in determining the moral worth of Ben’s action was something beyond his control or expectation?
So suppose two people, Abe and Ben, donated to an efficient charity. Abe intends to do some good for others. Ben intends this as the first but crucial stage of an elaborate plan to murder a rival. This plan is foiled, with the result that Ben’s money simply goes to the charity and does its work as normal. You would say that the actions of Abe and Ben are morally equal?
Yes, their particular acts of charity were morally equal, so long as their donations were equal.
Assuming Ben’s plan was foiled for reasons beyond his control or expectation, would you then say that the deciding factor in determining the moral worth of Ben’s action was something beyond his control or expectation?
The deciding factor in determining the moral worth of Ben’s actions was “out of his hands,” to a certain extent. He isn’t awarded points for trying.
Yes, their particular acts of charity were morally equal, so long as their donations were equal....The deciding factor in determining the moral worth of Ben’s actions was “out of his hands,” to a certain extent.
Hm! Those are surprising answers. I drew my initial argument from Kant’s Groundwork, and so far as I can tell, Kant doesn’t expect his reader to give the answer you did. So I’m at a loss as to what he would say to you now. I’m no Kantian, but I have to say I find myself unable to judge Abe and Ben’s actions as you have.
From the way you had written the previous few comments, I had a feeling you weren’t expecting me to react as I did (and I have to note, you have been by far the more logically polite partner in this discussion so far).
I drew my initial argument from Kant’s Groundwork, and so far as I can tell, Kant doesn’t expect his reader to give the answer you did.
This seems a common occurrence in the philosophy of that era. Hume is constantly asking rhetorical questions of his readers and assuming that they answer the same way he does...
I’m no Kantian, but I have to say I find myself unable to judge Abe and Ben’s actions as you have.
If I had to guess, I would say that our disagreement boils down to a definitional one rather than one involving empirical facts, in a rather unsurprising manner.
If I had to guess, I would say that our disagreement boils down to a definitional one rather than one involving empirical facts, in a rather unsurprising manner.
We aren’t in disagreement about any facts, but are simply using the term ‘moral judgement’ in different ways. I take moral judgement to be an after-the-fact calculation, and you take it to be a statement about intentionality and agency. You would, presumably, agree with me that Abe and Ben’s actions netted the same results, and I will agree with you that Abe’s motivations were “in better faith” than Ben’s, so we’ve essentially reached a resolution.
Well, I would say that Abe and Ben’s respective actions have different moral value, and you’ve said that they have the same moral value. I think we at least disagree about this, or do you think we’re using some relevant terms differently?
I think we at least disagree about this, or do you think we’re using some relevant terms differently?
I think we disagree on the meaning of terms related to the word ‘moral’ and nothing further. We aren’t generating different expectations, and there’s no empirical test we could run to find out which one of us is correct.
Presumably, a consequentialist would assert that insofar as I evaluate a philanthropist who acts out of spite differently than a philanthropist who acts out of altruism even if (implausibly) I expect both philanthropists to cause the same consequences in the long run, I am not making a moral judgment in so doing, but some other kind of judgment, perhaps an aesthetic one.
The reason I would evaluate a philanthropist who acts out of spite differently from a philanthropist who acts out of altruism is precisely because I don’t expect both philanthropists to cause the same consequences in the long run.
You’re right, that was careless of me. I intended the hypothetical only to be about the evaluations of their respective actions, not them as people. This is at least partly because Kantian deontology (as I understand it) doesn’t allow for any direct evaluations of people, only actions.
This wouldn’t be a convincing reply, I think, unless the consequentialist could come up with some reason for thinking such an evaluation is aesthetic other than ‘if it were a moral evaluation, it would conflict with consequentialism’. That is, assuming the consequentialist wants to appeal to common, actual moral evaluation in defending the plausibility of her view. She may not.
Convincing to whom? I mean, I agree completely that a virtue ethicist, for example, would not find it convincing. But neither is the assertion that it is a moral judgment convincing to a consequentialist.
If I’ve understood you, you expect even a consequentialist to say “Oh, you’re right, the judgment that a spiteful act of philanthropy is worse than an altruistic act of philanthropy whose expected consequences are the same is a moral judgment, and therefore moral judgments aren’t really about expected consequences.”
It’s not at all clear to me that a consequentialist who isn’t confused would actually say that.
X: Behold A and B in their hypothetical shenanigans. That you will tend to judge the action of A morally better than that of B is evidence that you make moral evaluations in accordance with moral theory M (on which they are morally dissimilar) rather than moral theory N (according to which they are equivalent). This is evidence for the truth of M.
Y: I grant you that I judge A to be better than B, but this isn’t a moral judgement (and so not evidence for M). This is, rather, an aesthetic judgement.
X: What is your reason for thinking this judgement is aesthetic rather than moral?
Y: I am an Nist. If it were a moral judgement, it would be evidence for M.
X should not find this convincing. Neither should Y, or anyone else. Y’s argument is terrible.
We could fix Y’s argument by having him go back and deny that he judges A’s act to be morally different from B’s. This is what Berry did. Or Y could defend his claim, on independent grounds, that his judgement is aesthetic and not moral. Or Y could go back and deny that his actual moral evaluations being in accordance with M are evidence for M.
(shrug) At the risk of repeating myself: what Y would actually say supposing Y were not a conveniently poor debater is not “I am an Nist” but rather “Because what makes a judgment of an act a moral judgment is N, and the judgment of A to be better than B has nothing to do with N.”
X might disagree with Y about what makes a judgment a moral judgment—in fact, if X is not an Nist, it seems likely that X does disagree—but X simply insisting that “A is better than B” is a moral judgment because X says so is unconvincing.
We could fix Y’s argument by having him go back and deny that he judges A’s act to be morally different from B’s.
There’s no going back involved. In this example Y has said all along that Y doesn’t judge A’s act to be morally different from B’s.
It seems to me that what you’re suggesting constitutes logical rudeness on the consequentialist’s part. The argument ran like this:
Take a hypothetical case involving A and B. You are asked to make a moral judgement. If you judge A and B’s actions differently, you are judging as if M is true. If you judge them to be the same, you are judging as if N is true.
The reply you provided wouldn’t be relevant if you said right away that A and B’s actions are morally the same. It’s only relevant if you’ve judged them to be different (in some way) in response to the hypothetical. Your reply is then that this judgement turns out not to be a moral judgement at all, but an irrelevant aesthetic judgement. This is logically rude because I asked you to make a moral judgement in the first place. You should have just said right off that you don’t judge the two cases differently.
If someone asks me to make a moral judgment about whether A and B’s actions are morally the same, and I judge that they are morally different, and then later I say that they are morally equivalent, I’m clearly being inconsistent. Perhaps I’m being logically rude, perhaps I’m confused, perhaps I’ve changed my mind.
If someone asks me to compare A and B, and I judge that A is better than B, and then later I say that they are morally equivalent, another possibility is that I was not making what I consider a moral judgment in the first place.
I’m confused as to why, upon being asked for a moral evaluation in the course of a discussion on consequentialism and deontology, someone would offer me an aesthetic evaluation they themselves consider irrelevant to the moral question. I don’t think my request for an evaluation was very ambiguous: Berry understood and answered accordingly, and it would surely be strange to think I had asked for an aesthetic evaluation in the middle of a defense of deontology. So I don’t understand how your suggestion would add anything to the discussion.
In the hypothetical discussion you asked me to consider, X makes an assertion about Y’s moral judgments, and Y replies that what X is referring to isn’t a moral judgment. Hence, I said “In this example Y has said all along that Y doesn’t judge A’s act to be morally different from B’s,” and you replied “It seems to me that what you’re suggesting constitutes logical rudeness on the consequentialist’s part.”
I, apparently incorrectly, assumed we were still talking about your hypothetical example.
Now, it seems you’re talking instead about your earlier conversation with Berry, which I haven’t read. I’ll take your word for it that my suggestion would not add anything to that discussion.
I didn’t think about it like that; that’s interesting. As I said, though, I don’t think consequentialists and deontologists are so far apart. If I had to argue as a consequentialist, I guess I would say that consequences matter because they are real effects, whereas moral intuitions like rightness don’t change anything apart from the mind of the agent. Example: if incest is wrong only because it is wrong (assume there are no ill effects, including the lack of genetic diversity), it seems to me that the deontologist must argue what exactly makes it wrong. In the analogous situation where it is the consequentialist defending him- or herself, s/he can say that the consequences matter because they are dependent variables that change because of “independent” actions of agents. (I mean independent mathematically, not in some libertarian free-will sense.)
If I had to argue as a consequentialist, I guess I would say that consequences matter because they are real effects, whereas moral intuitions like rightness don’t change anything apart from the mind of the agent.
This strikes me as begging the question. You say here that consequences matter because they are real effects [and real effects matter]. But the (hardcore) deontologist won’t grant you the premise that real effects matter, since that is exactly what his denial of consequentialism amounts to: the effects of an action don’t matter to its moral value.
If you grant my criticism, this might be a good way to connect your views to the mainstream: write up a criticism of a specific, living author’s defense of deontology, arguing validly from mutually accepted premises. Keep it brief, run it by your teacher, and then send it to that author. You’re very likely to get a response, I think, and this will serve to focus your attention on real points of disagreement.
I see how it appears that I was begging the question. I was unclear with what I meant. When I say that “consequences matter because they are real effects”, I only mean that consequences imply observable differences in outcomes. Rightness for its own sake seems to me to have no observational qualities, and so I think it is a bad explanation, because it can explain (or in this case, justify) any action. I think you are correct that I need to defend why real effects matter, though.
The ways in which this reminds me of my classroom experience are too many to count, but if the professor said something as idiotic as that to you, I’m really at a loss. Has he never heard of meta-ethics? Never read Mackie or studied Moral Realism?
Right? I would venture to guess that over 50% of students in my department are of the continental tradition and tend to think in anti-realist terms. I would then say 40% or more are of the analytic tradition, and love debating what things should be called instead of facts. The remaining 10% are, I would say, very diverse, but I have encountered very few naturalists.
These numbers might be very inflated because of the negative associations I am experiencing currently. Nevertheless, I am confident that I am correct within ten percentage points in either direction.
I think the professor really has some sophisticated views, but for the sake of the class level he is “dumbing it down” to intuitive “analysis”. He doesn’t often share his opinion, in order to foster more debate and less “guessing the teacher’s password”, which I think is a good thing for most philosophy students.
Well, that’s pretty much the deontological claim: that there is something to an act being wrong other than its consequences.
For instance, some would assert that an act of incestuous sex is wrong even if all the standard negative consequences are denied: no deformed babies, no unhappy feelings, no scandal, and so on. Why? Because they say there exists a moral fact that incest is wrong, which is not merely a description or prediction of incestuous acts’ effects.
“An incestuous act of sex at time t” is a descriptive statement about the world that could change the output of a utility function, just as “a scandal at time t + 1 week” or “a deformed baby born at time t + 9 months” could, right? Now, my personal utility function doesn’t seem to put any (terminal) value on the first statement, but if someone else’s utility function does, what makes mine “consequentialist” and theirs not?
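The structural point can be made concrete with a toy sketch (a hypothetical illustration, not anyone’s actual formalism): both “utility functions” below have exactly the same form and are maximized the same way; they differ only in which world-descriptions they assign terminal weight to.

```python
# Toy sketch: a "utility function" as a map from world-descriptions
# to values. The event names and weights are made up for illustration.

def utility_a(events):
    """Assigns terminal weight only to downstream outcomes."""
    weights = {"scandal": -10, "deformed_baby": -50}
    return sum(weights.get(e, 0) for e in events)

def utility_b(events):
    """Structurally identical, but also assigns terminal weight
    to the act-description itself."""
    weights = {"scandal": -10, "deformed_baby": -50, "incestuous_act": -100}
    return sum(weights.get(e, 0) for e in events)

# Stipulate a world history with the act but no further ill effects:
history = ["incestuous_act"]
print(utility_a(history))  # 0
print(utility_b(history))  # -100
```

Nothing in the formalism marks `utility_b` as less “consequentialist” than `utility_a`; the disagreement is over which descriptions get terminal value, not over the shape of the function.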
That’s pretty weird, considering that so-called “sophisticated” consequentialist theories (where you can say something like: although in this instance it would be better for me to do X than Y, overall it would be better to have a disposition to do Y than X, so I shall have such a disposition) have been a huge area of discussion recently. And yes, it’s bloody obvious and it’s a scandal it took so long for these kinds of ideas to get into contemporary philosophy.
Perhaps the prof meant that such a consequentialist account appears to tell you to follow certain “deontological” requirements, but for the wrong reason in some way. In much the same way that the existence of a vengeful God might make acting morally also selfishly rational, but if you acted morally out of self-interest then you would be doing it for the wrong reasons, and wouldn’t have actually got to the heart of things.
Alternatively, they’re just useless. Philosophy has a pretty high rate of that, but don’t throw out the baby with the bathwater! ;)
Yeah, we read Railton’s sophisticated consequentialism, which sounded pretty good. Norcross on why consequentialism is about offering suggestions and not requirements was also not too bad. I feel like the texts I am reading are more valuable than the classes, to be frank. Thanks for the input!
To answer a question you gave in the OP, Jackson’s views are very close to what Eliezer’s metaethics seem to be, and Railton has some similarities with Luke’s views.
Hmmm, that’s right! I can’t believe I didn’t see that; thanks. I think Railton is more similar to Luke than Jackson is to Eliezer, though, if I understand Eliezer well enough. Is there a comparison anywhere outlining the differences between what Eliezer and Luke think across different fields?
So the professor was playing Devil’s Advocate, in other words? I’m not familiar with the “requirements” argument he’s trying, but like a lot of people here, that’s because I think philosophy classes tend to be a waste of time. For primarily the reasons you list in the first paragraph. I’m a consequentialist, myself.
Do you actually think you’re having problems with understanding the Sequences, or just in comparing them with your Ethics classes?
It isn’t that I don’t understand the sequences on their own. It’s more that I don’t see a) how they relate to the “mainstream” (though I read Luke’s post on the various connections, morality seems to be sparse on the list, or I missed it), and b) what Eliezer in particular is trying to get across. The topics in the sequence are very widespread and don’t seem to be narrowing in on a particular idea. I found A Human’s Guide to Words many times more useful. Luke’s sequence was easier, but then there is a lot less material.
I think he was playing devil’s advocate. Thanks for the comment.
I think EY’s central point is something like: just because there’s no built-in morality for the universe, doesn’t mean there isn’t built-in morality for humans. At the same time, that “moral sense” does need care and feeding, otherwise you get slavery—and thinking spanking your kids is right.
(But it’s been a while since I’ve read the entire ME series, so I could have confused it with something else I’ve read.)
I have a hunch things are a good bit better in proper philosophy than you think. Admittedly, most intro and medium level courses regarding ethics are pretty terrible (this is obviously only from personal experience.) If I had to make a guess as to why that is, it’d probably be for the same reason I think most of the rest of philosophy courses could be better: too much focus on history.
In my intermediate level course, we barely talk about history at all. It is supposed to focus on “developments” in the last thirty years or so. The problem I have is that most profs think that philosophy is able to go about figuring out the truth without things like empirism, scientific study, neuroscience, probability and decision theory. Everything is very “intuitive” and I find that difficult to grasp.
For example, when discussing deontolgy, I asked why there should be absolute “requirements” as an argument against consequentialism, seeing that if it’s true that the best consequences would be take these requiremesnts into consequentialist accounts of outcomes, then that is what a conequentialist would (should) say as well! The professor’s answer and that of many students was: “That’s just the way it is. Some things ought not be done, only because they must ought not be done”. That is a hard pill for me to swallow. In this case I am much more comfortable with Eliezer’s Ethical Injunctions.
(The prof was not necessarily promoting dentology but was arguing on it’s behalf.)
Note that you could reverse this conversation: a deontologist could ask you why we should privilege the consequences so much, instead of just doing the right things regardless of the consequences. I would expect that your response would be pretty close to “that’s just the way it is, it’s the consequences that are the most important”—at least, I know that mine would be. And the deontologist would find this a very hard pill to swallow.
As Alicorn has pointed out, you can’t understand deontology by requiring an explanation on consequentialism’s terms, and you likewise can’t understand consequentialism by requiring an explanation on deontology’s terms. At some point, your moral reasoning has to bottom out to some set of moral intuitions which are just taken as axiomatic and cannot be justified.
Except that deontology cares about consequences as well, so there’s no need to convince them that the consequences of our actions have moral weight. If act A’s consequences violate the Categorical Imperative, and act B’s consequences don’t, then the Kantian (for example) will pick act B.
The friction between deontology and consequentialism is that they disagree about what should be maximized, a distinction which is often simplified to consequentialists wanting to maximize the ‘Good’ and deontologists wanting to maximize the ‘Right’.
I’ll agree that past this point, much of the objections to the other side’s positions hit ‘moral bedrock’ and intuitions are often seen as the solution to this gap.
For Kant, (and for all the Kantians I know of), consequences aren’t evaluable in terms of the categorical imperative. This is something like a category mistake. Kant is pretty explicit that the consequences of an action well and truly do not matter to the moral value of an action. He would say, I think, that there is no way to draw boundary lines around ‘consequences’ that doesn’t place all moral weight on something like the intention of the action.
Well then the Kantian would pick B because he intends to not violate the CI with his actions? I’m not actually sure how this is different than valuing the consequences of your actions at all?
In your initial set up, you said that A and B differ in that A’s consequences violate the CI, while B’s consequences do not. I’m claiming that, for Kant, consequences aren’t evaluable in terms of the CI, and so we don’t yet have on the table a way for a Kantian to distinguish A and B. Consequences aren’t morally evaluable, Kant would say, in the very intuitive sense in which astronomical phenomena aren’t morally evaluable (granting that we sometimes assess astronomical phenomena as good or bad in a non-moral sense).
I once again find Kantianism immensely counter-intuitive and confusing, so at this point I must thank you for correcting my misconceptions and undoing my rationalizations. :)
I’ll try to present an argument toward Kant’s views in a clear way. The argument will consist of a couple of hopefully non-puzzling scenarios for moral evaluation, an evaluation I expect at least to be intuitive to you (though perhaps not endorsed wholeheartedly), leading to the conclusion that you do not in fact concern yourself with consequences when making a moral evaluation. At some point, I expect, I’ll make a claim that you disagree with, and at that point we can discuss, if you like, where the disagreement lies exactly. So:
It’s common for consequentialists to evaluate actions in terms of expected rather than actual consequences: the philanthropist who donates to an efficient charity is generally not thought less morally good if some uncontrollable and unpredictable event prevents the good she expected to achieve. While we are ready to say that what happened in such a case was bad, we would not say that it was a moral bad, at least not on the philanthropist’s part.
If we grant this, then we have already admitted that the important factor in moral evaluations is not any actual event in the world, but rather something like expected consequences. In other words, moral evaluation deals with a mental event related to an action (i.e. the expectation of a certain consequence), not, or at least not directly, the consequence of that event.
And Kant would go further to point out that it’s not quite just expected consequences either. We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor (expecting, but ignoring, the fact that this donation will also do some good for others) and one who donates out of a desire to do some good for others (say, expecting but ignoring the fact that this donation will also upset her neighbor). Both philanthropists expect the same consequences to play out, but we do not evaluate them equally.
So it is not expected (rather than actual) consequences that are the important factor in moral evaluations, because we can detect differences in our evaluations even when these are equal. Rather, Kant would go on to say, we evaluate actions on the basis of the reasons people have for bringing about the consequences they expect. (There are other options here, of course, and so the argument could go on).
If you’ve accepted every premise thus far, I think you’re pretty close to being in range of Kant’s argument for the CI. Has that helped?
I don’t accept this premise. A philanthropist whose actions lead to good consequences is morally better than a philanthropist whose actions lead to less-good consequences, wholly independent of their actual intention. This just seems like one of the fundamental aspects of consequentialism, to me.
Further, I would evaluate these two philanthropists exactly the same way, as long as the externalities of spiting neighbors don’t escalate to a level where they have substantial moral weight. Someone who saves a child because he is interested in seducing the child’s mother and someone who saves a child out of pure altruism may not be equally moral, but if you only have this single instance with which to judge them, then they must be considered so.
So suppose two people, Abe and Ben, donated to an efficient charity. Abe intends to do some good for others. Ben intends this as the first but crucial stage of an elaborate plan to murder a rival. This plan is foiled, with the result that Ben’s money simply goes to the charity and does its work as normal. You would say that the actions of Abe and Ben are morally equal?
Assuming Ben’s plan was foiled for reasons beyond his control or expectation, would you then say that the deciding factor in determining the moral worth of Ben’s action was something beyond his control or expectation?
Yes, their particular acts of charity were morally equal, so long as their donations were equal.
The deciding factor in determining the moral worth of Ben’s actions was “out of his hands,” to a certain extent. He isn’t awarded points for trying.
Hm! Those are surprising answers. I drew my initial argument from Kant’s Groundwork, and so far as I can tell, Kant doesn’t expect his reader to give the answer you did. So I’m at a loss as to what he would say to you now. I’m no Kantian, but I have to say I find myself unable to judge Abe and Ben’s actions as you have.
From the way you had written the previous few comments, I had a feeling you weren’t expecting me to react as I did (and I have to note, you have been by far the more logically polite partner in this discussion so far.)
This seems a common occurrence in the philosophy of that era. Hume is constantly asking rhetorical questions of his readers and assuming that they answer the same way he does...
If I had to guess, I would say that our disagreement boils down to a definitional one rather than one involving empirical facts, in a rather unsurprising manner.
Could you elaborate on this?
We aren’t in disagreement about any facts, but are simply using the term ‘moral judgement’ in different ways. I take moral judgement to be an after-the-fact calculation, and you take it to be a statement about intentionality and agency. You would, presumably, agree with me that Abe and Ben’s actions netted the same results, and I will agree with you that Abe’s motivations were “in better faith” than Ben’s, so we’ve essentially reached a resolution.
Well, I would say that Abe and Ben’s respective actions have different moral value, and you’ve said that they have the same moral value. I think we at least disagree about this, or do you think we’re using some relevant terms differently?
I think we disagree on the meaning of terms related to the word ‘moral’ and nothing further. We aren’t generating different expectations, and there’s no empirical test we could run to find out which one of us is correct.
Hm, I think you may be right. I cannot for the life of me think of an empirical test that would decide the issue.
Presumably, a consequentialist would assert that insofar as I evaluate a philanthropist who acts out of spite differently than a philanthropist who acts out of altruism even if (implausibly) I expect both philanthropists to cause the same consequences in the long run, I am not making a moral judgment in so doing, but some other kind of judgment, perhaps an aesthetic one.
The reason I would evaluate a philanthropist who acts out of spite differently from a philanthropist who acts out of altruism is precisely because I don’t expect both philanthropists to cause the same consequences in the long run.
Yes, I agree. That’s why I said “implausibly”. But the hypothetical Hen proposed presumed this, and I chose not to fight it.
This seems like a judgement about the philanthropists, rather than the act of donating. My example was intended to discuss the act, not the agent.
Your wording suggests otherwise: “We do not evaluate equally a philanthropist who donates to an efficient charity to spite her neighbor...”
You’re right, that was careless of me. I intended the hypothetical only to be about the evaluations of their respective actions, not them as people. This is at least partly because Kantian deontology (as I understand it) doesn’t allow for any direct evaluations of people, only actions.
This wouldn’t be a convincing reply, I think, unless the consequentialist could come up with some reason for thinking such an evaluation is aesthetic other than ‘if it were a moral evaluation, it would conflict with consequentialism’. That is, assuming, the consequentialist wants to appeal to common, actual moral evaluation in defending the plausibility of her view. She may not.
Convincing to whom?
I mean, I agree completely that a virtue ethicist, for example, would not find it convincing.
But neither is the assertion that it is a moral judgment convincing to a consequentialist.
If I’ve understood you, you expect even a consequentialist to say “Oh, you’re right, the judgment that a spiteful act of philanthropy is worse than an altruistic act of philanthropy whose expected consequences are the same is a moral judgment, and therefore moral judgments aren’t really about expected consequences.”
It’s not at all clear to me that a consequentialist who isn’t confused would actually say that.
Me? Hopefully, the consequentialist as well.
Imagine this conversation:
X: Behold A and B in their hypothetical shenanigans. That you will tend to judge the action of A morally better than that of B is evidence that you make moral evaluations in accordance with moral theory M (on which they are morally dissimilar) rather than moral theory N (according to which they are equivalent). This is evidence for the truth of M.
Y: I grant you that I judge A to be better than B, but this isn’t a moral judgement (and so not evidence for M). This is, rather, an aesthetic judgement.
X: What is your reason for thinking this judgement is aesthetic rather than moral?
Y: I am an Nist. If it were a moral judgement, it would be evidence for M.
X should not find this convincing. Neither should Y, or anyone else. Y’s argument is terrible.
We could fix Y’s argument by having him go back and deny that he judges A’s act to be morally different from B’s. This is what Berry did. Or Y could defend his claim, on independent grounds, that his judgement is aesthetic and not moral. Or Y could go back and deny that his actual moral evaluations being in accordance with M are evidence for M.
(shrug) At the risk of repeating myself: what Y would actually say supposing Y were not a conveniently poor debater is not “I am an Nist” but rather “Because what makes a judgment of an act a moral judgment is N, and the judgment of A to be better than B has nothing to do with N.”
X might disagree with Y about what makes a judgment a moral judgment—in fact, if X is not an Nist, it seems likely that X does disagree—but X simply insisting that “A is better than B” is a moral judgment because X says so is unconvincing.
There’s no going back involved. In this example Y has said all along that Y doesn’t judge A’s act to be morally different from B’s.
It seems to me that what you’re suggesting constitutes logical rudeness on the consequentialist’s part. The argument ran like this:
Take a hypothetical case involving A and B. You are asked to make a moral judgement. If you judge A and B’s actions differently, you are judging as if M is true. If you judge them to be the same, you are judging as if N is true.
The reply you provided wouldn’t be relevant if you said right away that A and B’s actions are morally the same. It’s only relevant if you’ve judged them to be different (in some way) in response to the hypothetical. Your reply is then that this judgement turns out not to be a moral judgement at all, but an irrelevant aesthetic judgement. This is logically rude because I asked you to make a moral judgement in the first place. You should have just said right off that you don’t judge the two cases differently.
If someone asks me to make a moral judgment about whether A and B’s actions are morally the same, and I judge that they are morally different, and then later I say that they are morally equivalent, I’m clearly being inconsistent. Perhaps I’m being logically rude, perhaps I’m confused, perhaps I’ve changed my mind.
If someone asks me to compare A and B, and I judge that A is better than B, and then later I say that they are morally equivalent, another possibility is that I was not making what I consider a moral judgment in the first place.
I’m confused as to why, upon being asked for a moral evaluation in the course of a discussion on consequentialism and deontology, someone would offer me an aesthetic evaluation they themselves consider irrelevant to the moral question. I don’t think my request for an evaluation was very ambiguous: Berry understood and answered accordingly, and it would surely be strange to think I had asked for an aesthetic evaluation in the middle of a defense of deontology. So I don’t understand how your suggestion would add anything to the discussion.
In the hypothetical discussion you asked me to consider, X makes an assertion about Y’s moral judgments, and Y replies that what X is referring to isn’t a moral judgment. Hence, I said “In this example Y has said all along that Y doesn’t judge A’s act to be morally different from B’s,” and you replied “It seems to me that what you’re suggesting constitutes logical rudeness on the consequentialist’s part.”
I, apparently incorrectly, assumed we were still talking about your hypothetical example.
Now, it seems you’re talking instead about your earlier conversation with Berry, which I haven’t read. I’ll take your word for it that my suggestion would not add anything to that discussion.
Dave, I think you’re pulling my leg. Your initial comment to me was from one of my posts to Berry, so of course you read it! I’m going to tap out.
I didn’t think about it like that; that’s interesting. As I said, though, I don’t think consequentialists and deontologists are so far apart. If I had to argue as a consequentialist, I guess I would say that consequences matter because they are real effects, whereas moral intuitions like rightness don’t change anything apart from the mind of the agent. Example: if incest is wrong only because it is wrong (assume there are no ill effects, including the lack of genetic diversity), to me it seems like the deontologist must argue what exactly makes it wrong. In the analogous situation where it is the consequentialist defending him or herself, s/he can say that the consequences matter because they are dependent variables that change because of “independent” actions of agents. (I mean independent mathematically, not in some libertarian free-will sense.)
Thanks for your help.
This strikes me as begging the question. You say here that consequences matter because they are real effects [and real effects matter]. But the (hardcore) deontologist won’t grant you the premise that real effects matter, since that is exactly what his denial of consequentialism amounts to: the effects of an action don’t matter to its moral value.
If you grant my criticism, this might be a good way to connect your views to the mainstream: write up a criticism of a specific, living author’s defense of deontology, arguing validly from mutually accepted premises. Keep it brief, run it by your teacher, and then send it to that author. You’re very likely to get a response, I think, and this will serve to focus your attention on real points of disagreement.
Hey Hen,
Thanks for your suggestion, I like it.
I see how it appears that I was begging the question. I was unclear about what I meant. When I say that “consequences matter because they are real effects”, I only mean that consequences imply observable differences in outcomes. Rightness for its own sake seems to me to have no observational qualities, and so I think it is a bad explanation, because it can explain (or in this case, justify) any action. I think you are correct that I need to defend why real effects matter, though.
Jeremy
The ways in which this reminds me of my classroom experience are too many to count, but if the professor said something as idiotic as that to you, I’m really at a loss. Has he never heard of meta-ethics? Never read Mackie or studied Moral Realism?
Right? I would venture to guess that over 50% of students in my department are of the continental tradition and tend to think in anti-realist terms. I would then say 40% or more are of the analytic tradition, and love debating what things should be called instead of facts. The remaining 10% are, I would say, very diverse, but I have encountered very few naturalists.
These numbers might be very inflated because of the negative associations I am experiencing currently. Nevertheless, I am confident that I am correct within ten percentage points in either direction.
I think the professor really has some sophisticated views, but for the sake of the class level he is “dumbing it down” to intuitive “analysis”. He doesn’t often share his opinion, in order to foster more debate and less “guessing the teacher’s password”, which I think is a good thing for most philosophy students.
Out of curiosity, where do you go to school?
McGill University in Montreal. You?
The Open University in Israel (I’m not, strictly speaking, out of high school yet, so this is all I got.)
Well, that’s pretty much the deontological claim: that there is something to an act being wrong other than its consequences.
For instance, some would assert that an act of incestuous sex is wrong even if all the standard negative consequences are denied: no deformed babies, no unhappy feelings, no scandal, and so on. Why? Because they say there exists a moral fact that incest is wrong, which is not merely a description or prediction of incestuous acts’ effects.
“An incestuous act of sex at time t” is a descriptive statement of the world which could be able to change the output of a utility function, just as “a scandal at time t + 1 week” or “a deformed baby born at time t + 9 months” could, right? Now, my personal utility function doesn’t seem to put any (terminal) value on the first statement either, but if someone else’s utility function does, what makes mine “consequentialist” and theirs not?
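The point above can be made concrete with a small sketch (my own construction, not something from the thread; the statement strings and weights are hypothetical): treat a world state as a set of descriptive statements and a utility function as any map from world states to numbers. Nothing in the formalism itself distinguishes “downstream effect” statements from statements describing the act.

```python
# A world state is just a frozenset of descriptive statements; a utility
# function is any map from world states to numbers.

def utility_a(world: frozenset) -> float:
    """Puts terminal disvalue only on conventional bad outcomes."""
    score = 0.0
    if "deformed baby at t+9mo" in world:
        score -= 10.0
    if "scandal at t+1wk" in world:
        score -= 5.0
    return score

def utility_b(world: frozenset) -> float:
    """Identical, except it also puts terminal disvalue on the act itself."""
    score = utility_a(world)
    if "incestuous act at t" in world:
        score -= 20.0
    return score

# A world with the act but none of the standard ill effects:
world = frozenset({"incestuous act at t"})
print(utility_a(world))  # 0.0
print(utility_b(world))  # -20.0
```

Both are perfectly well-formed utility functions over descriptions of the world, which is exactly the question the comment raises: what, if anything, makes the first “consequentialist” and the second not?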
That’s pretty weird, considering that so-called “sophisticated” consequentialist theories (where you can say something like: although in this instance it would be better for me to do X than Y, overall it would be better to have a disposition to do Y than X, so I shall have such a disposition) have been a huge area of discussion recently. And yes, it’s bloody obvious and it’s a scandal it took so long for these kinds of ideas to get into contemporary philosophy.
Perhaps the prof meant that such a consequentialist account appears to tell you to follow certain “deontological” requirements, but for the wrong reason in some way. In much the same way that the existence of a vengeful God might make acting morally also selfishly rational, but if you acted morally out of self-interest then you would be doing it for the wrong reasons, and wouldn’t have actually got to the heart of things.
Alternatively, they’re just useless. Philosophy has a pretty high rate of that, but don’t throw out the baby with the bathwater! ;)
Yeah, we read Railton’s sophisticated consequentialism, which sounded pretty good. Norcross on why consequentialism is about offering suggestions and not requirements was also not too bad. I feel like the texts I am reading are more valuable than the classes, to be frank. Thanks for the input!
To answer a question you gave in the OP, Jackson’s views are very close to what Eliezer’s metaethics seem to be, and Railton has some similarities with Luke’s views.
Hmmm, that’s right! I can’t believe I didn’t see that, thanks. I think Railton is more similar to Luke than Jackson is to Eliezer, though, if I understand Eliezer well enough. Is there a comparison anywhere outlining the differences between what Eliezer and Luke think across different fields?
You should try some Brad Hooker. One of the most defensible versions of consequentialism out there.
Cool, I will check him out. Thanks.
So the professor was playing Devil’s Advocate, in other words? I’m not familiar with the “requirements” argument he’s trying, but like a lot of people here, that’s because I think philosophy classes tend to be a waste of time. For primarily the reasons you list in the first paragraph. I’m a consequentialist, myself.
Do you actually think you’re having problems with understanding the Sequences, or just in comparing them with your Ethics classes?
It isn’t that I don’t understand the sequences on their own. It’s more that I don’t see a) how they relate to the “mainstream” (though I read Luke’s post on the various connections, morality seems to be sparse on the list, or I missed it), and b) what Eliezer in particular is trying to get across. The topics in the sequence are very widespread and don’t seem to be narrowing in on a particular idea. I found A Human’s Guide to Words many times more useful. Luke’s sequence was easier, but then there is a lot less material.
I think he was playing devil’s advocate. Thanks for the comment.
I think EY’s central point is something like: just because there’s no built-in morality for the universe, doesn’t mean there isn’t built-in morality for humans. At the same time, that “moral sense” does need care and feeding, otherwise you get slavery—and thinking spanking your kids is right.
(But it’s been a while since I’ve read the entire ME series, so I could have confused it with something else I’ve read.)