If you’ve acclimated to torture it’s no longer torture.
If you’ve acclimated to torture the effects have likely left you with a life not worth living.
Torture isn’t something you can acclimate yourself to in hypotheticals. E.g., the interlocutor could say “oh, you would acclimate to waterboarding? Well then I’ll scoop your brain out, intercept your sensory modalities, and feed you horror. But wait, just when you’re getting used to it I wipe your memory.”
All this misses the point of the hypothetical by being too focused on the details rather than the message. Have you told someone the trolley experiment and had them say something like “but I would call the police,” or “I’m not strong enough to push a fat man over,” and had to reformulate the experiment over and over until they got the message?
Torture isn’t something you can acclimate yourself to in hypotheticals....
This is a fair point. Though my response was very much intended to be a joke.
All this misses the point of the hypothetical by being too focused on the details rather than the message. Have you told someone the trolley experiment and had them say something like “but I would call the police,” or “I’m not strong enough to push a fat man over,” and had to reformulate the experiment over and over until they got the message?
I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one, and it’s no accident that it’s people who hear these problems for the first time that react like this. They’re the only ones taking it seriously: moral reasoning is not hypothetical, and what they’re doing is refusing to treat the problem hypothetically.
Learning to operate within the hypothetical just means learning to stop seeing it as an opportunity for moral reasoning. After that, all we’re doing is trying to maximize a value under a theory. But that’s neither here nor there.
I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one,
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me. Indeed, I’m inclined to doubt it.
In much the same way: if I’m asked to multiply 367 by 1472 the response I would give in the real world is to launch a calculator application, but when asked to do this by the woman giving me a neuropsych exam after my stroke I didn’t do that, because I understood that the goal was not to find out the product of 367 and 1472 but rather to find out something about my brain that would be revealed by my attempt to calculate that product.
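(For the curious, the product the exam was never actually after: 367 × 1472 = 367 × 1,000 + 367 × 472 = 367,000 + 173,224 = 540,224.)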
I agree with you that it’s no accident that people react like this to trolley problems, but I disagree with your analysis of the causes.
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me.
You called the trolley problem a pedagogic tool: what do you have in mind here specifically? What sort of work do you take the trolley problem to be doing?
It clarifies the contrast between evaluating the rightness of an act in terms of the relative desirability of the likely states of the world after that act is performed or not performed, vs. evaluating the rightness of an act in other terms.
Okay, that sounds reasonable to me. But what do we mean by ‘act’ in this case? We could, for instance, imagine a trolley problem in which no one had the power to change the course of the train, and it just went down one track or the other on the basis of chance. We could still evaluate one outcome as better than the other (presumably the one where one man dies instead of five), but there’s no action.
Are we making a moral judgement in that case? Or do we reason differently when an agent is involved?
I don’t know who “we” are.
What I say about your proposed scenario is that the hypothetical world in which five people die is worse than the hypothetical world in which one person dies, all else being equal. So, no, my reasoning doesn’t change because there’s an agent involved.
But someone who evaluates the standard trolley problem differently might come to different conclusions.
For example, I know any number of deontologists who argue that the correct answer in the standard trolley problem is to let the five people die, because killing someone is worse than letting five people die. I’m not exactly sure what they would say about your proposed scenario, but I assume they would say in that case, since there’s no choice and therefore no “killing someone” involved, the world where five people die is worse.
Similarly, given someone like you who argues that the correct answer in the standard trolley problem is to “yell real loud or call the police or break the game somehow,” I’m not sure what you would say about your own proposed scenario.
It shows that someone is engaging with the problem as a serious moral one
I think it shows someone is trying to “solve” a hypothetical or be clever, because with a trivial amount of deliberation they would anticipate the interlocutor’s response and reformulate. Moreover, none of this engages the point of the exercise, which you’re free to argue against without being opaque. E.g., “okay, clearly the point of this trolley experiment is to see if my moral intuitions align with consequentialism or utilitarianism; I don’t think this experiment does that because blah blah blah.”
Moreover, moral reasoning is hypothetical if you’re sufficiently reflective.
Moreover, moral reasoning is hypothetical if you’re sufficiently reflective.
Well, in what kinds of things does moral reasoning conclude? I suppose I would say ‘actions and evaluations’ or something like that. Can you think of anything else?
Moral reasoning should inform your moral intuitions—what you’ll do in the absence of an opportunity to reflect. How do you prepare your moral intuitions for handling future scenarios?
Well, regardless of whether we have time to reflect or not, I take it moral reasoning or moral intuitions conclude either in an action or in something like an evaluative judgement. This would distinguish such reasoning, I suppose, from theoretical reasoning which begins from and concludes in beliefs. Does that sound right to you?
An evaluative judgement is an action; you’re fundamentally saying moral reasoning has consequences. I agree with that, of course. I don’t think it distinguishes it from theoretical reasoning.
By ‘action’ I mean something someone might see you do, something undertaken intentionally with the aim of changing something around you. But when we ask someone to react to a trolley problem, we don’t expect them to act as a result of their reasoning (since there’s no actual trolley). We just want them to reply. So sometimes moral reasoning concludes merely in a judgement, and sometimes it concludes in an action (if we were actually in the trolley scenario, for example) that will, I suppose, also involve a judgement. Does all this seem reasonable to you?
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion (I think it’s that you want to say hypotheticals produce different results than reality). But to answer your question, I don’t think that giving a result to the trolley problem merely results in a judgement. I think it also potentially results in reflective equilibrium of moral intuitions, which then possibly results in different decisions in the future (I’ve had this experience). I think it also potentially affects the interlocutor or audience.
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion.
I’ve already given you my conclusion, such as it is: not that hypotheticals produce different results, but that reasoning about hypotheticals can’t be moral reasoning. I’m just trying to think through the problem myself, I don’t have a worked out theory here, or any kind of plan. If you have a more productive way to figure out how hypotheticals are related to moral reasoning then I’m happy to pursue that.
But to answer your question, I don’t think that giving a result to the trolley problem merely results in a judgement.
Right, but I’m just talking about the posing of the question as an invitation for someone to think about it. The aim or end result of that thinking is some kind of conclusion, and I’m just asking what kinds of conclusions moral reasoning ends in. Since we use moral reasoning in deciding how to act, I take it for granted that one kind of conclusion is an action: “It is right to X, and possible for me to X, therefore...” and then comes the action. When someone is addressing a trolley problem, they might think to themselves: “If one does X, one will get the result A, and if one does Y, one will get the result B. A is preferable to B, so...” and then comes the conclusion. The conclusion in this case is not an action, but just the proposition that “...given the circumstances, one should do X.”
ETA: So, supposing that reasoning about the trolley problem here is moral reasoning (as opposed to, say, the sort of reasoning we’re doing when we play a game of chess) then moral reasoning can conclude sometimes in actions, and sometimes in judgements.
Suppose I sit down at time T1 to consider the hypothetical question of what responses I consider appropriate to various events, and I conclude that in response to event E1 I ought to take action A1. Then at T2, E1 occurs, and I take action A1 based on reasoning of the form “That’s E1, and I’ve previously decided that in case of E1 I should perform A1, so I’m going to perform A1.”
If I’ve understood you correctly, the only question being discussed here is whether the label “moral reasoning” properly applies to what occurs at T1, T2, both, or neither.
Can you give me an example of something that might be measurably different in the world under some possible set of conditions depending on which answer to that question turns out to be true?
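For anyone who finds it easier to see the structure in code, here is a minimal sketch of the T1/T2 setup just described. It is purely illustrative: the names deliberate, policy, and react are mine, and E1 and A1 are just the placeholder strings from the comment above.

```python
# Illustrative sketch only: the T1 (armchair deliberation) vs. T2 (event response)
# structure described above. All names here are placeholders, not anyone's proposal.

def deliberate():
    """T1: consider hypothetical events and commit to responses in advance."""
    return {"E1": "A1"}  # conclusion reached at T1: in case of E1, perform A1

policy = deliberate()  # the armchair reasoning happens once, before any event

def react(event):
    """T2: no fresh deliberation, just recall of the earlier conclusion."""
    # "That's E1, and I've previously decided that in case of E1 I perform A1."
    return policy.get(event)

print(react("E1"))  # E1 occurs at T2; prints "A1", the precommitted action
```

The question at issue is then which of deliberate (T1) and react (T2), if either, deserves the label “moral reasoning.”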
If I’ve understood you correctly, the only question being discussed here is whether the label “moral reasoning” properly applies to what occurs at T1, T2, both, or neither.
You’ve understood me perfectly, and that’s an excellent way of putting things. I think there’s an interpretation of those variables such that both what occurs at T1 and at T2 could be called moral reasoning, especially if one expects E1 to occur. But suppose you just, by way of armchair reasoning, decide that if E1 ever happens, you’ll A1. Now suppose E1 has occurred, but suppose also that you’ve forgotten the reasoning which led you to conclude that A1 would be right: you remember the conclusion, but you’ve forgotten why you thought it. That scenario would, I believe, satisfy your description, and it would be a case in which your action is quite suspect. Not wholly so, since you may have good reason to believe your past decisions are reliable, but if you don’t know why you’re acting when you act, you’re not acting in a fully rational way.
I think it would be appropriate to say, in this case, that you are not to be morally praised (e.g. “you’re a good person,” “you’re a hero,” etc.) for such an action (if it is good) in quite the measure you would be if you knew what you were doing. I bring up praise just because this is an easy way for us to talk about what we consider to be the right response to morally good action, regardless of our theories. Does all this sound reasonable?
If what went on at T1 was fully moral reasoning, then no part of the moral action story seems to be left out: you reasoned your way to an action, and at some later time undertook that action. But if it’s true that we would consider an action in which you’ve forgotten your reasoning a defective action, less worthy of moral praise, then we consider it important that the reasoning be present to you as you act.
And I take it for granted, I suppose, that we don’t consider it terribly praiseworthy for someone to come to a bunch of good conclusions from the armchair and never make any effort to carry them out.
I’ll point out again that the phrase “moral reasoning” as you have been using it (to mean praiseworthy reasoning) is importantly different from how that phrase is being used by others.
That aside, I agree with you that in the scenario you describe, my reasoning at T2 (when E1 occurs) is not especially praiseworthy and thus does not especially merit the label “moral reasoning” as you’re using it. I don’t agree that my reasoning at T1 is not praiseworthy, though. If I sit down at T1 and work out the proper thing to do given E1, and I do that well enough that when E1 occurs at T2 I do the proper thing even though I’m not reasoning about it at T2, that seems compelling evidence that my reasoning at T1 is praiseworthy.
If I sit down at T1 and work out the proper thing to do given E1, and I do that well enough that when E1 occurs at T2 I do the proper thing even though I’m not reasoning about it at T2, that seems compelling evidence that my reasoning at T1 is praiseworthy.
Sure, we agree there, I just wanted to point out that the, shall we say, ‘presence’ of the reasoning in one’s action at T2 is both a necessary and sufficient condition for the action’s being morally praiseworthy if it’s good. The reasoning done at T1 is, of itself, neither necessary nor sufficient.
I don’t agree that the action at T2 is necessary. I would agree that in the absence of the action at T2, it would be difficult to know that the thinking at T1 was praiseworthy, but what makes the thinking at T1 praiseworthy is the fact that it led to a correct conclusion (“given E1 do A1”). It did not retroactively become praiseworthy when E1 occurred.
So you would say that deliberating to the right answer in a moral hypothetical is, on its own, something which should or could earn the deliberator moral praise?
Would you say that people can or ought to be praised or blamed for their answers to the trolley problem?
I would say that committing to a correct policy to implement in case of a particular event occurring is a good thing to have done. (It is sometimes an even better thing to have done if I can then articulate that policy, and perhaps even that commitment, in a compelling way to others.)
I think that’s an example of “deliberating to the right answer in a moral hypothetical earning moral praise” as you’re using those phrases, so I think yes, it’s something that could earn moral praise.
People certainly can be praised or blamed for their answers to the trolley problem—I’ve seen it happen myself—but that’s not terribly interesting.
More interestingly, yes, there are types of answers to the standard trolley problem I think deserve praise.
In case of a possible misunderstanding: I didn’t mean to imply that moral reasoning is literally hypothetical, but that hypotheticals can be a form of moral reasoning (and I hope we aren’t arguing about what ‘reasoning’ is). The problem that I think you have with this is that you believe hypothetical moral reasoning doesn’t generalize? If so, let me show you how that might work.
Hmm, save one person or let five people die.
My intuition tells me that killing is wrong.
Wait, what is intuition and why should I trust it?
I guess it’s the result of experience: cultural, personal, and evolutionary.
Now why should I trust that?
I suppose I shouldn’t, because there’s no guarantee that any of that should result in the “right” answer. Or even something that I actually prefer.
Hmm… If I look at the consequences, I see I prefer a world in which the five people live.
And this could go on and on until you’ve recalibrated your moral intuitions using hypothetical moral reasoning, and now when asked a similar hypothetical (or put in a similar situation) your immediate intuition is to look at the consequences. Why is the hypothetical part useful? It uncovers previously unquestioned assumptions. It’s also a nice compact form for discussing such issues.
but that hypotheticals can be a form of moral reasoning (and I hope we aren’t arguing about what ‘reasoning’ is).
We’re not, and I understand. We do disagree on that claim: I’m suggesting that no moral reasoning can be hypothetical, and that if some bit of reasoning proceeds from a hypothetical, we can know on the basis of that alone that it’s not really moral reasoning. I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
Hmm… If I look at the consequences, I see I prefer a world in which the five people live.
This is a good framing, thanks. By ‘on and on’ I assume you mean that the reasoner should go on to examine his decision to look at expected consequences, and perhaps more importantly his preference for the world in which five people live. After all, he shouldn’t trust that any more than the intuition, right?
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
Can’t that apply to hypotheticals? If you come to the wrong conclusion you’re a horrible person, sort of thing.
I would probably call “moral reasoning” something along the lines of “reasoning about morals”. Even using your above definition, I think reasoning about morals using hypotheticals can result in a judgment, about what sort of action would be appropriate in the situation.
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed.
That can’t be what people normally mean by “moral reasoning”. Do you have a philosophy background?
I’m suggesting that no moral reasoning can be hypothetical
I don’t see why that would be the case. Cheap illustration:
TEACHER: Jimmy, suppose I tell you that P, and also that P implies Q. What does that tell you about Q?
JIMMY: Q is true.
TEACHER: That’s right Jimmy! Your reasoning is praiseworthy!
JIMMY: Getting the right answer while reasoning about that hypothetical fills me with pride!
I don’t see why that would be the case. Cheap illustration:...
You’ve taken my conditional: “If something is moral reasoning, it is something for which we can be praised or blamed” for a biconditional. I only intend the former. ETA: I should say more. I don’t mean any kind of praise or blame, but the kind appropriate to morally good or bad action. One might believe that this isn’t different in kind from the sort of praise we offer in response to, say, excellence in playing the violin, but I haven’t gotten the sense that this view is on the table. If we agree that there is such a thing as distinctively moral praise or blame, then I’ll commit to the biconditional.
I suspect ABrooks is continuing his tradition of interpreting “X reasoning” to mean reasoning that has the property of being X, rather than reasoning about X.
If I’m right, I expect his reply here is that your example is not of hypothetical reasoning at all—supposing that actually happened, Jimmy really would be reasoning, so it would be actual reasoning. Sure, it would be reasoning about a hypothetical, but so what?
I share your sense, incidentally, that this is not what people normally mean, either by “moral reasoning” or “hypothetical reasoning.”
I suspect ABrooks is continuing his tradition of interpreting “X reasoning” to mean reasoning that has the property of being X, rather than reasoning about X.
It’s not an interpretation, it’s a claim. If something is reasoning about moral subject matter, then, I claim, it is the sort of thing that is (morally) praiseworthy or blameworthy. When we call someone bad or good for something they’ve done, we at least in part mean to praise or blame their reasoning. And one of the reasons we call someone good or bad, or their action good or bad, is an evaluation of their reasoning as good or bad. And praise and blame are, of course, the products of moral reasoning. And we do consider them to be morally valued: to (excepting cases of ignorance) praise bad people is itself bad, and to blame good people is itself good.
Now, the claim I’m arguing against is the claim that there is another kind of moral reasoning which is a) neither praiseworthy, nor blameworthy, b) does not result in an action or an evaluation of an actual person or action, and c) is somehow tied to or predictive of reasoning that is praiseworthy, blameworthy, and resulting in action or actual evaluation.
So I’ve never intended ‘moral reasoning’ to mean ‘reasoning that is moral’ except as a consequence of my argument. That phrase means, in the first place, reasoning about moral matters. Same goes for how I’ve been understanding ‘hypothetical reasoning’. (ETA: though here, I can’t see how one could draw a distinction between ‘reasoning from a hypothetical’ and ‘reasoning that is hypothetical’. I’m not trying to talk about ‘reasoning about a hypothetical’ in the broadest sense, which might include coming up with trolley problems. I only mean to talk about reasoning that begins with a hypothetical.)
I am sorry if that hasn’t been clear.
If something is reasoning about moral subject matter, then, I claim, it is the sort of thing that is (morally) praiseworthy or blameworthy.
Er. Just to make sure I understand this: is “whether it’s correct to put babies in a blender for fun” moral subject matter? If so, does it follow that if I am reasoning about whether it’s correct to put babies in a blender for fun, I am therefore something that is reasoning about moral subject matter? If so, does it follow that I am the sort of thing that is morally praiseworthy or blameworthy?
When we call someone bad or good for something they’ve done, we at least in part mean to praise or blame their reasoning.
Sure, if I were to say “Sam is a bad person” because Sam did X, I would likely be trying to imply something about the thought process that led Sam to do X.
And one of the reasons we call someone good or bad, or their action good or bad, is an evaluation of their reasoning as good or bad.
I agree that it’s possible for me to call Sam “good” or “bad” based on some aspect of their reasoning, as above, though I don’t really endorse that usage. I agree that it’s possible to call Sam’s act “good” or “bad” based on some aspect of Sam’s reasoning, although I don’t endorse that usage either. I agree that it’s possible to label reasoning that causes me to call either Sam or Sam’s act “good” or “bad” as “good reasoning” or “bad reasoning”, respectively, but this is neither something I could ever imagine myself doing, nor the interpretation I would naturally apply to labeling reasoning in this way.
And praise and blame are, of course, the products of moral reasoning.
That’s not clear to me.
to (excepting cases of ignorance) praise bad people is itself bad,
That’s not clear to me either.
and to blame good people is itself good.
That’s definitely not clear to me.
So I’ve never intended ‘moral reasoning’ to mean ‘reasoning that is moral’ except as a consequence of my argument. That phrase means, in the first place, reasoning about moral matters.
Ah, OK. That was in fact not clear; thanks for clarifying it.
Just to make sure I understand this: is “whether it’s correct to put babies in a blender for fun” moral subject matter?
Not necessarily, it may or may not be taken up as a moral question. We can, for example, study just how much fun it is and leave aside the question of its moral significance. If you’re reasoning about whether or not it’s right in some moral sense to put babies in a blender, then you’re doing something like moral reasoning, but if this were purely in the hypothetical then I think it would fall short. If you were seriously considering putting babies in a blender, then I think I’d want to call it moral reasoning, but in this case I think you could obviously be praised or blamed for your answer (well, maybe not praised so much).
and to blame good people is itself good.
That’s definitely not clear to me.
Sorry, typo. I meant ‘to blame good people (or to blame people for good actions) is bad.’ It shows some praiseworthy decency to appreciate the moral life of, I dunno, MLK. It shows real character to stick up for a good but maligned person. Likewise, it shows some shallowness to have praised someone who only appeared good, but was in fact bad. And it shows some serious defect of character to praise someone we know to be bad (I dunno, Manson?).
I agree that it’s possible for me to call Sam “good” or “bad” based on some aspect of their reasoning, as above, though I don’t really endorse that usage.
What’s the difference between agreeing here, and endorsing the usage?
OK, so just to be clear, you would say that the following are examples of moral reasoning...
“It would be fun to put this baby in that blender, and I want to have fun, but it would be wrong, so I won’t”
“It would be wrong to put this baby in that blender, and I don’t want to be wrong, but it would be fun, so I will”
...and the following are not:
“In general, putting babies in blenders would be fun, and I want to have fun, but in general it would be wrong, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would not do so, all else being equal.”
“In general, putting babies in blenders would be wrong, and I don’t want to be wrong, but in general it would be fun, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would do so, all else being equal.”
Yes? No?
If so, I continue to disagree with you; I absolutely would call those last two cases examples of moral reasoning. If not, I don’t think I’m understanding you at all.
What’s the difference between agreeing here, and endorsing the usage?
If A is some object or event that I observe, and L is a label in a language that consistently evokes a representation of A in the minds of native speakers, I agree that it’s possible for me to call A L. If using L to refer to A has other effects beyond evoking A, and I consider those effects to be bad, I might reject using L to refer to A.
For example, I agree that the label “faggot” reliably refers to a male homosexual in American English, but I don’t endorse the usage in most cases because it’s conventionally insulting. (There are exceptions.)
‘to blame good people (or to blame people for good actions) is bad.’ It shows some praiseworthy decency
Incidentally, here you demonstrate one of the behaviors that causes me not to endorse the usage of calling Sam “good” or “bad” in this case. First you went from making an observation about a particular act of reasoning to labeling the reasoner in a particular way, and now you’ve gone from labeling the reasoner in that way to inferring other facts about the reasoner. I would certainly agree that the various acts we’re talking about are evidence of praiseworthy decency on Sam’s part, but the way you are talking about it makes it very easy to make the mistake of treating them as logically equivalent to praiseworthy decency.
People do this all the time (e.g., the fundamental attribution error), and it causes a lot of problems.
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed.
Oh! I understand you now. Thanks for clarifying this.
Also...
Can you please clarify which of your comments in this thread you stand by, and which ones you don’t stand by?
I stand by everything I said about trolley problems. I don’t think an eternity of torture is preferable to a dust speck in one’s eye.