“Choosing infanticide over abandonment is pretty pointless, so why do it?”
“Killing another living thing doesn’t qualify as “euthanasia” if you do it for your benefit, not that being’s.”
Let me respond with a little storytelling, without making a clear point.
I am not trying to prove you wrong, just sharing my personal experience.
Warning: depressing stories about illness; probably hard reading.
I was once friends with a boy who had progressive muscular dystrophy. It is a degenerative disease in which your muscles gradually stop working; around the age of 20, most patients die because they stop breathing.
If you have heard inspiring stories about people in wheelchairs adapting to their situation, well, here adaptation could only be short-term, because next year you might not be able to do what you can do now. The pain was not excruciating, but there was some; a body deprived of exercise gives you that feedback. If he had a bad dream at night, he could not turn over to his other side (a very common remedy, which most people use without even realizing it). The boy made two suicide attempts, although, frankly, he did not really mean them. He would phone his friends in the evening to relieve his pain; these calls were very unwelcome. I sometimes pretended not to be at home, and I know other people who did the same (we were in our twenties). His despair was then deepened by the feeling that he was not loved.
Once, when he called his psychologist, he caught her in the middle of a suicide attempt (she had poisoned herself with drugs); she repeated to him HIS own statements from their previous phone calls. I am not saying it was HIS fault; the lady clearly failed to safeguard against the known risks of her profession (and had other problems, a departed partner, etc.). I am just illustrating how hard it sometimes was to deal with him. (To close this branch of the story: he called other people, who saved her life.)
His parents took great care of him, to the limit of their financial means and with the limited help of our government. There were frequent conflicts between him and his parents, though, which made him feel unloved, again. On the other hand, his parents were deeply religious and later, knowingly, had another baby with the same genetic defect; they did not choose abortion. The older boy died at the age of 28, his life being surprisingly long.
This story clearly contains aspects that were not optimized: the parents could have earned more money and brought more comfort into his life; he could have had a personal assistant at night, more physiotherapy exercises, a better computer, some lessons on how to deal with people and find a girlfriend (his desires were strong); he could have tried harder to develop his talents and get a job that would make him feel useful to society. (We eventually persuaded him to get a job as a phone operator; it lasted a year or so.) His friends, including me, could have worked harder on our emotional maturity. But can you see all the energy and resources it takes to make a misery somewhat better?
“Choosing infanticide over abandonment is pretty pointless, so why do it?” Abandoning a baby with a severe genetic defect at birth condemns the baby to an even lower quality of life in most government institutions, unless a millionaire chooses to adopt him.
I have a counterargument to my own reasoning right away: what if some parents had killed their baby diagnosed with adrenoleukodystrophy (but with no symptoms developed yet) a year before Augusto and Michaela Odone invented Lorenzo’s Oil for their son? Such parents would have lost a potentially healthy baby, and the baby would have lost a realistic chance at a normal life...
I am not really trying to win this argument, just explaining why I sometimes TOY with the idea that infanticide is not so immoral, and with considering it a form of euthanasia.
There are plenty of diseases we can now deal with quite well because we didn’t commit infanticide or murder everyone who had them. It is no coincidence that treatments were found: if we killed everyone with a disease, there would be no search for a treatment.
More like, to determine whether people are paying any attention. (I once took an online personality test which included questions such as “I’ve never eaten before” to prevent people from using bots or similar to screw up their data.)
It’s hard to get people to answer such things straightforwardly. I once included “Some people have fingernails” in a poll, as about the most uncontroversially true thing I could think of, and participants found a way to argue that it wasn’t true—since “some” understates the proportion.
Well… “Some people” does usually implicate ‘not all people, and not even all people except a negligible minority.’ But if we go by implicatures rather than literal meanings, then “X has fingernails” (in contexts where everyone knows X is a human) usually implicates, in my experience at least, that X’s fingernails are not trimmed nearly as short as possible, since the literal meaning would be quite uninformative once you know X is a human.
To clarify: A = Dust speck in your eye, and your life is otherwise as it would have been without this deal. B = 3^^^3 years of torture, followed by death.
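(For readers unfamiliar with the notation: 3^^^3 is Knuth’s up-arrow notation, 3 ↑↑↑ 3. A minimal sketch of how the notation is defined and how fast it grows; the function name here is my own, not from the thread:)

```python
# Knuth's up-arrow notation: up_arrow(a, b, n) computes a ↑^n b.
# One arrow is ordinary exponentiation; each extra arrow iterates
# the previous operator.
def up_arrow(a, b, n):
    if n == 1:
        return a ** b          # a ↑ b = a**b
    if b == 0:
        return 1               # base case of the iteration
    # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up_arrow(a, up_arrow(a, b - 1, n), n - 1)

print(up_arrow(3, 3, 1))  # 3↑3   = 3**3  = 27
print(up_arrow(3, 2, 2))  # 3↑↑2  = 3**3  = 27
print(up_arrow(3, 3, 2))  # 3↑↑3  = 3**27 = 7625597484987
```

3^^^3 = 3↑↑(3↑↑3) is a power tower of 3s that is 7,625,597,484,987 levels tall, far too large to compute or even write down; the point of the hypothetical is precisely that B’s duration dwarfs any ordinary intuition about lifespans.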
Is that an easy choice for you? If not, can you summarize your arguments in favor of choosing B?
If not, can you summarize your arguments in favor of choosing B?
Well, if I choose B, I’ll be alive for a very large number of years. I’ll be alive so long, that I expect that I’ll get used to anything deployed to torture me. And I’ll be alive so long, I’d need to study a fair amount of cosmology just to understand what my lifetime will involve, by way of the deaths and rebirths of whole universes or whatever. Some of that would be interesting to see.
The easy thought experiment would be dust speck vs. 3 years of torture followed by death. I think there, I’d go with the speck.
I’ll be alive so long, that I expect that I’ll get used to anything deployed to torture me.
Is this based on the experience of torture victims? I think that “get used to” would more closely resemble “catatonic” than “unperturbed.” I don’t think your ability to be interested would survive very long.
If you’ve acclimated to torture it’s no longer torture.
If you’ve acclimated to torture the effects have likely left you with a life not worth living.
Torture isn’t something you can acclimate yourself to in hypotheticals. E.g., the interlocutor could say, “Oh, you would acclimate to waterboarding? Well then, I’ll scoop your brain out, intercept your sensory modalities, and feed you horror. But wait: just when you’re getting used to it, I wipe your memory.”
All this misses the point of the hypothetical by being too focused on the details rather than the message. Have you ever told someone the trolley experiment and had them say something like “but I would call the police” or “I’m not strong enough to push a fat man over,” and had to reformulate the experiment over and over until they got the message?
Torture isn’t something you can acclimate yourself to in hypotheticals....
This is a fair point. Though my response was very much intended to be a joke.
All this misses the point of the hypothetical by being too focused on the details rather than the message. Have you ever told someone the trolley experiment and had them say something like “but I would call the police” or “I’m not strong enough to push a fat man over,” and had to reformulate the experiment over and over until they got the message?
I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one, and it’s no accident that it’s people who hear these problems for the first time that react like this. They’re the only ones taking it seriously: moral reasoning is not hypothetical, and what they’re doing is refusing to treat the problem hypothetically.
Learning to operate within the hypothetical just means learning to stop seeing it as an opportunity for moral reasoning. After that, all we’re doing is trying to maximize a value under a theory. But that’s neither here nor there.
I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one,
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me. Indeed, I’m inclined to doubt it.
In much the same way: if I’m asked to multiply 367 by 1472 the response I would give in the real world is to launch a calculator application, but when asked to do this by the woman giving me a neuropsych exam after my stroke I didn’t do that, because I understood that the goal was not to find out the product of 367 and 1472 but rather to find out something about my brain that would be revealed by my attempt to calculate that product.
I agree with you that it’s no accident that people react like this to trolley problems, but I disagree with your analysis of the causes.
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me.
You called the trolley problem a pedagogic tool: what do you have in mind here, specifically? What sort of work do you take the trolley problem to be doing?
It clarifies the contrast between evaluating the rightness of an act in terms of the relative desirability of the likely states of the world after that act is performed or not performed, vs. evaluating the rightness of an act in other terms.
Okay, that sounds reasonable to me. But what do we mean by ‘act’ in this case? We could, for instance, imagine a trolley problem in which no one had the power to change the course of the train, and it just went down one track or the other on the basis of chance. We could still evaluate one outcome as better than the other (presumably the one man dying instead of five), but there’s no action.
Are we making a moral judgement in that case? Or do we reason differently when an agent is involved?
What I say about your proposed scenario is that the hypothetical world in which five people die is worse than the hypothetical world in which one person dies, all else being equal. So, no, my reasoning doesn’t change because there’s an agent involved.
But someone who evaluates the standard trolley problem differently might come to different conclusions.
For example, I know any number of deontologists who argue that the correct answer in the standard trolley problem is to let the five people die, because killing someone is worse than letting five people die. I’m not exactly sure what they would say about your proposed scenario, but I assume they would say that in that case, since there’s no choice and therefore no “killing someone” involved, the world where five people die is worse.
Similarly, given someone like you who argues that the correct answer in the standard trolley problem is to “yell real loud or call the police or break the game somehow,” I’m not sure what you would say about your own proposed scenario.
It shows that someone is engaging with the problem as a serious moral one
I think it shows someone is trying to “solve” a hypothetical or be clever, because with a trivial amount of deliberation they would anticipate the interlocutor’s response and reformulate. Moreover, none of this engages the point of the exercise, against which you’re free to argue without being opaque. E.g., “Okay, clearly the point of this trolley experiment is to see whether my moral intuitions align with consequentialism or utilitarianism; I don’t think this experiment does that, because blah blah blah.”
Moreover, moral reasoning is hypothetical if you’re sufficiently reflective.
Moreover, moral reasoning is hypothetical if you’re sufficiently reflective.
Well, in what kinds of things does moral reasoning conclude? I suppose I would say ‘actions and evaluations’ or something like that. Can you think of anything else?
Moral reasoning should inform your moral intuitions—what you’ll do in the absence of an opportunity to reflect. How do you prepare your moral intuitions for handling future scenarios?
Well, regardless of whether we have time to reflect or not, I take it moral reasoning or moral intuitions conclude either in an action or in something like an evaluative judgement. This would distinguish such reasoning, I suppose, from theoretical reasoning which begins from and concludes in beliefs. Does that sound right to you?
An evaluative judgement is an action; you’re fundamentally saying moral reasoning has consequences. I agree with that, of course. I don’t think it distinguishes it from theoretical reasoning.
By ‘action’ I mean something someone might see you do, something undertaken intentionally with the aim of changing something around you. But when we ask someone to react to a trolley problem, we don’t expect them to act as a result of their reasoning (since there’s no actual trolley). We just want them to reply. So sometimes moral reasoning concludes merely in a judgement, and sometimes it concludes in an action (if we were actually in the trolley scenario, for example) that will, I suppose, also involve a judgement. Does all this seem reasonable to you?
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion (I think it’s that you want to say hypotheticals produce different results than reality). But to answer your question, I don’t think that giving a result to the trolley problem merely results in a judgement. I think it also potentially results in reflective equilibrium of moral intuitions, which then possibly results in different decisions in the future (I’ve had this experience). I think it also potentially affects the interlocutor or audience.
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion.
I’ve already given you my conclusion, such as it is: not that hypotheticals produce different results, but that reasoning about hypotheticals can’t be moral reasoning. I’m just trying to think through the problem myself, I don’t have a worked out theory here, or any kind of plan. If you have a more productive way to figure out how hypotheticals are related to moral reasoning then I’m happy to pursue that.
But to answer your question, I don’t think that giving a result to the trolley problem merely results in a judgement.
Right, but I’m just talking about the posing of the question as an invitation for someone to think about it. The aim or end result of that thinking is some kind of conclusion, and I’m just asking what kinds of conclusions moral reasoning ends in. Since we use moral reasoning in deciding how to act, I take it for granted that one kind of conclusion is an action: “It is right to X, and possible for me to X, therefore...” and then comes the action. When someone is addressing a trolley problem, they might think to themselves: “If one does X, one will get the result A, and if one does Y, one will get the result B. A is preferable to B, so...” and then comes the conclusion. The conclusion in this case is not an action, but just the proposition that “...given the circumstances, one should do X.”
ETA: So, supposing that reasoning about the trolley problem here is moral reasoning (as opposed to, say, the sort of reasoning we’re doing when we play a game of chess), then moral reasoning can conclude sometimes in actions, and sometimes in judgements.
Suppose I sit down at time T1 to consider the hypothetical question of what responses I consider appropriate to various events, and I conclude that in response to event E1 I ought to take action A1. Then at T2, E1 occurs, and I take action A1 based on reasoning of the form “That’s E1, and I’ve previously decided that in case of E1 I should perform A1, so I’m going to perform A1.”
If I’ve understood you correctly, the only question being discussed here is whether the label “moral reasoning” properly applies to what occurs at T1, T2, both, or neither.
Can you give me an example of something that might be measurably different in the world under some possible set of conditions depending on which answer to that question turns out to be true?
If I’ve understood you correctly, the only question being discussed here is whether the label “moral reasoning” properly applies to what occurs at T1, T2, both, or neither.
You’ve understood me perfectly, and that’s an excellent way of putting things. I think there’s an interpretation of those variables such that both what occurs at T1 and what occurs at T2 could be called moral reasoning, especially if one expects E1 to occur. But suppose you just, by way of armchair reasoning, decide that if E1 ever happens, you’ll A1. Now suppose E1 has occurred, but suppose also that you’ve forgotten the reasoning which led you to conclude that A1 would be right: you remember the conclusion, but you’ve forgotten why you thought it. That scenario would, I believe, satisfy your description, and it would be a case in which your action is quite suspect. Not wholly so, since you may have good reason to believe your past decisions are reliable, but if you don’t know why you’re acting when you act, you’re not acting in a fully rational way.
I think it would be appropriate to say, in this case, that you are not to be morally praised (e.g. “you’re a good person”, “You’re a hero” etc.) for such an action (if it is good) in quite the measure you would be if you knew what you were doing. I bring up praise, just because this is an easy way for us to talk about what we consider to be the right response to morally good action, regardless of our theories. Does all this sound reasonable?
If what went on at T1 was fully moral reasoning, then no part of the moral action story seems to be left out: you reasoned your way to an action, and at some later time undertook that action. But if it’s true that we would consider an action in which you’ve forgotten your reasoning a defective action, less worthy of moral praise, then we consider it important that the reasoning be present to you as you act.
And I take it for granted, I suppose, that we don’t consider it terribly praiseworthy for someone to come to a bunch of good conclusions from the armchair and never make any effort to carry them out.
I’ll point out again that the phrase “moral reasoning” as you have been using it (to mean praiseworthy reasoning) is importantly different from how that phrase is being used by others.
That aside, I agree with you that in the scenario you describe, my reasoning at T2 (when E1 occurs) is not especially praiseworthy and thus does not especially merit the label “moral reasoning” as you’re using it. I don’t agree that my reasoning at T1 is not praiseworthy, though. If I sit down at T1 and work out the proper thing to do given E1, and I do that well enough that when E1 occurs at T2 I do the proper thing even though I’m not reasoning about it at T2, that seems compelling evidence that my reasoning at T1 is praiseworthy.
If I sit down at T1 and work out the proper thing to do given E1, and I do that well enough that when E1 occurs at T2 I do the proper thing even though I’m not reasoning about it at T2, that seems compelling evidence that my reasoning at T1 is praiseworthy.
Sure, we agree there, I just wanted to point out that the, shall we say, ‘presence’ of the reasoning in one’s action at T2 is both a necessary and sufficient condition for the action’s being morally praiseworthy if it’s good. The reasoning done at T1 is, of itself, neither necessary nor sufficient.
I don’t agree that the action at T2 is necessary. I would agree that in the absence of the action at T2, it would be difficult to know that the thinking at T1 was praiseworthy, but what makes the thinking at T1 praiseworthy is the fact that it led to a correct conclusion (“given E1 do A1”). It did not retroactively become praiseworthy when E1 occurred.
So you would say that deliberating to the right answer in a moral hypothetical is, on its own, something which should or could earn the deliberator moral praise?
Would you say that people can or ought to be praised or blamed for their answers to the trolley problem?
I would say that committing to a correct policy to implement in case of a particular event occurring is a good thing to have done. (It is sometimes an even better thing to have done if I can then articulate that policy, and perhaps even that commitment, in a compelling way to others.)
I think that’s an example of “deliberating to the right answer in a moral hypothetical earning moral praise” as you’re using those phrases, so I think yes, it’s something that could earn moral praise.
People certainly can be praised or blamed for their answers to the trolley problem—I’ve seen it happen myself—but that’s not terribly interesting.
More interestingly, yes, there are types of answers to the standard trolley problem I think deserve praise.
In case of a possible misunderstanding: I didn’t mean to imply that moral reasoning is literally hypothetical, but that hypotheticals can be a form of moral reasoning (and I hope we aren’t arguing about what ‘reasoning’ is). The problem that I think you have with this is that you believe hypothetical moral reasoning doesn’t generalize? If so, let me show you how that might work.
Hmm, save one person or let five people die.
My intuition tells me that killing is wrong.
Wait, what is intuition and why should I trust it?
I guess it’s the result of experience: cultural, personal, and evolution.
Now why should I trust that?
I suppose I shouldn’t, because there’s no guarantee that any of that should result in the “right” answer. Or even something that I actually prefer.
Hmm… If I look at the consequences, I see I prefer a world in which the five people live.
And this could go on and on until you’ve recalibrated your moral intuitions using hypothetical moral reasoning, and now when asked a similar hypothetical (or put in a similar situation) your immediate intuition is to look at the consequences. Why is the hypothetical part useful? It uncovers previously unquestioned assumptions. It’s also a nice compact form for discussing such issues.
but that hypotheticals can be a form of moral reasoning (and I hope we aren’t arguing about what ‘reasoning’ is).
We’re not, and I understand. We do disagree on that claim: I’m suggesting that no moral reasoning can be hypothetical, and that if some bit of reasoning proceeds from a hypothetical, we can know on the basis of that alone that it’s not really moral reasoning. I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
Hmm… If I look at the consequences, I see I prefer a world in which the five people live.
This is a good framing, thanks. By ‘on and on’ I assume you mean that the reasoner should go on to examine his decision to look at expected consequences, and perhaps more importantly his preference for the world in which five people live. After all, he shouldn’t trust that any more than the intuition, right?
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
Can’t that apply to hypotheticals? If you come to the wrong conclusion you’re a horrible person, sort of thing.
I would probably call “moral reasoning” something along the lines of “reasoning about morals”. Even using your above definition, I think reasoning about morals using hypotheticals can result in a judgment, about what sort of action would be appropriate in the situation.
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed.
That can’t be what people normally mean by “moral reasoning”. Do you have a philosophy background?
I’m suggesting that no moral reasoning can be hypothetical
I don’t see why that would be the case. Cheap illustration:
TEACHER: Jimmy, suppose I tell you that P, and also that P implies Q. What does that tell you about Q?
JIMMY: Q is true.
TEACHER: That’s right, Jimmy! Your reasoning is praiseworthy!
JIMMY: Getting the right answer while reasoning about that hypothetical fills me with pride!
I don’t see why that would be the case. Cheap illustration:...
You’ve taken my conditional: “If something is moral reasoning, it is something for which we can be praised or blamed” for a biconditional. I only intend the former. ETA: I should say more. I don’t mean any kind of praise or blame, but the kind appropriate to morally good or bad action. One might believe that this isn’t different in kind from the sort of praise we offer in response to, say, excellence in playing the violin, but I haven’t gotten the sense that this view is on the table. If we agree that there is such a thing as distinctively moral praise or blame, then I’ll commit to the biconditional.
I suspect ABrooks is continuing his tradition of interpreting “X reasoning” to mean reasoning that has the property of being X, rather than reasoning about X.
If I’m right, I expect his reply here is that your example is not of hypothetical reasoning at all—supposing that actually happened, Jimmy really would be reasoning, so it would be actual reasoning. Sure, it would be reasoning about a hypothetical, but so what?
I share your sense, incidentally, that this is not what people normally mean, either by “moral reasoning” or by “hypothetical reasoning.”
I suspect ABrooks is continuing his tradition of interpreting “X reasoning” to mean reasoning that has the property of being X, rather than reasoning about X.
It’s not an interpretation, it’s a claim. If something is reasoning about moral subject matter, then, I claim, it is the sort of thing that is (morally) praiseworthy or blameworthy. When we call someone bad or good for something they’ve done, we at least in part mean to praise or blame their reasoning. And one of the reasons we call someone good or bad, or their action good or bad, is an evaluation of their reasoning as good or bad. And praise and blame are, of course, the products of moral reasoning. And we do consider them to be morally valued: to (excepting cases of ignorance) praise bad people is itself bad, and to blame good people is itself good.
Now, the claim I’m arguing against is the claim that there is another kind of moral reasoning which is a) neither praiseworthy, nor blameworthy, b) does not result in an action or an evaluation of an actual person or action, and c) is somehow tied to or predictive of reasoning that is praiseworthy, blameworthy, and resulting in action or actual evaluation.
So I’ve never intended ‘moral reasoning’ to mean ‘reasoning that is moral’ except as a consequence of my argument. That phrase means, in the first place, reasoning about moral matters. The same goes for how I’ve been understanding ‘hypothetical reasoning’. (ETA: though here, I can’t see how one could draw a distinction between ‘reasoning from a hypothetical’ and ‘reasoning that is hypothetical’. I’m not trying to talk about ‘reasoning about a hypothetical’ in the broadest sense, which might include coming up with trolley problems. I only mean to talk about reasoning that begins with a hypothetical.)
If something is reasoning about moral subject matter, then, I claim, it is the sort of thing that is (morally) praiseworthy or blameworthy.
Er. Just to make sure I understand this: is “whether it’s correct to put babies in a blender for fun” moral subject matter? If so, does it follow that if I am reasoning about whether it’s correct to put babies in a blender for fun, I am therefore something that is reasoning about moral subject matter? If so, does it follow that I am the sort of thing that is morally praiseworthy or blameworthy?
When we call someone bad or good for something they’ve done, we at least in part mean to praise or blame their reasoning.
Sure, if I were to say “Sam is a bad person” because Sam did X, I would likely be trying to imply something about the thought process that led Sam to do X.
And one of the reasons we call someone good or bad, or their action good or bad, is an evaluation of their reasoning as good or bad.
I agree that it’s possible for me to call Sam “good” or “bad” based on some aspect of their reasoning, as above, though I don’t really endorse that usage. I agree that it’s possible to call Sam’s act “good” or “bad” based on some aspect of Sam’s reasoning, although I don’t endorse that usage either. I agree that it’s possible to label reasoning that causes me to call either Sam or Sam’s act “good” or “bad” as “good reasoning” or “bad reasoning”, respectively, but this is neither something I could ever imagine myself doing, nor the interpretation I would naturally apply to labeling reasoning in this way.
And praise and blame are, of course, the products of moral reasoning.
That’s not clear to me.
to (excepting cases of ignorance) praise bad people is itself bad,
That’s not clear to me either.
and to blame good people is itself good.
That’s definitely not clear to me.
So I’ve never intended ‘moral reasoning’ to mean ‘reasoning that is moral’ except as a consequence of my argument. That phrase means, in the first place, reasoning about moral matters.
Ah, OK. That was in fact not clear; thanks for clarifying it.
Just to make sure I understand this: is “whether it’s correct to put babies in a blender for fun” moral subject matter?
Not necessarily, it may or may not be taken up as a moral question. We can, for example, study just how much fun it is and leave aside the question of its moral significance. If you’re reasoning about whether or not it’s right in some moral sense to put babies in a blender, then you’re doing something like moral reasoning, but if this were purely in the hypothetical then I think it would fall short. If you were seriously considering putting babies in a blender, then I think I’d want to call it moral reasoning, but in this case I think you could obviously be praised or blamed for your answer (well, maybe not praised so much).
and to blame good people is itself good.
That’s definitely not clear to me.
Sorry, typo. I meant ‘to blame good people (or to blame people for good actions) is bad.’ It shows some praiseworthy decency to appreciate the moral life of, I dunno, MLK. It shows real character to stick up for a good but maligned person. Likewise, it shows some shallowness to have praised someone who only appeared good but was in fact bad. And it shows some serious defect of character to praise someone we know to be bad (I dunno, Manson?).
I agree that it’s possible for me to call Sam “good” or “bad” based on some aspect of their reasoning, as above, though I don’t really endorse that usage.
What’s the difference between agreeing here, and endorsing the usage?
OK, so just to be clear, you would say that the following are examples of moral reasoning...
“It would be fun to put this baby in that blender, and I want to have fun, but it would be wrong, so I won’t”
“It would be wrong to put this baby in that blender, and I don’t want to be wrong, but it would be fun, so I will”
...and the following are not:
“In general, putting babies in blenders would be fun, and I want to have fun, but in general it would be wrong, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would not do so, all else being equal.”
“In general, putting babies in blenders would be wrong, and I don’t want to be wrong, but in general it would be fun, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would do so, all else being equal.”
Yes? No?
If so, I continue to disagree with you; I absolutely would call those last two cases examples of moral reasoning. If not, I don’t think I’m understanding you at all.
What’s the difference between agreeing here, and endorsing the usage?
If A is some object or event that I observe, and L is a label in a language that consistently evokes a representation of A in the minds of native speakers, I agree that it’s possible for me to call A L. If using L to refer to A has other effects beyond evoking A, and I consider those effects to be bad, I might reject using L to refer to A.
For example, I agree that the label “faggot” reliably refers to a male homosexual in American English, but I don’t endorse the usage in most cases because it’s conventionally insulting. (There are exceptions.)
‘to blame good people (or to blame people for good actions) is bad.’ It shows some praiseworthy decency
Incidentally, here you demonstrate one of the behaviors that causes me not to endorse the usage of calling Sam “good” or “bad” in this case. First you went from making an observation about a particular act of reasoning to labeling the reasoner in a particular way, and now you’ve gone from labeling the reasoner in that way to inferring other facts about the reasoner. I would certainly agree that the various acts we’re talking about are evidence of praiseworthy decency on Sam’s part, but the way you are talking about it makes it very easy to make the mistake of treating them as logically equivalent to praiseworthy decency.
People do this all the time (e.g., the fundamental attribution error), and it causes a lot of problems.
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed.
Oh! I understand you now. Thanks for clarifying this.
An obvious argument in favor of B is that you get to live for 3^^^3 years. A reframing:
A = Dust speck in your eye, after which you lead a normal life except that you cease to exist a mere 60 years later. B = Tortured for the rest of your life, but you never die.
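For a sense of scale here: 3^^^3 is Knuth's up-arrow notation, where one arrow is exponentiation and each additional arrow iterates the previous operation. A minimal sketch of the recursion (the function name `up_arrow` is mine, chosen for illustration):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a {n arrows} b.

    One arrow (n=1) is plain exponentiation; each extra arrow
    applies the (n-1)-arrow operation b times in a right-nested chain.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # base case: an empty chain
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Only the smallest cases are actually computable:
print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s of height 7,625,597,484,987,
# far beyond anything that could be evaluated physically.
```

So "3^^^3 years" is not merely a long time; it dwarfs any cosmological timescale.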
(nods) That seemed the obvious argument, as you say, though it depends on the notion that being tortured for a year is a net utility gain (relative to not existing for that year at all), which seemed implausible to me. But it turns out that is indeed what ABrooks meant.
I generally avoid downvoting comments that are direct responses to me. I’m not exactly sure why, beyond a sense that it just feels wrong, although I can justify it in a number of different ways that I’m pretty sure aren’t my real reasons.
I do the same. The reasoning that comes to mind is that the timing tends to imply that you did it, and that that—especially if you’re already in an adversarial mode—can provoke a cycle of retaliation that’s harmful to your karma and doesn’t carry much informative value. Short of that, I feel it carries adversarial implications that’re harmful to the quality of discussion.
I’m reasonably sure that that’s my true objection.
Yeah, that’s plausible in my case as well. Evidence in favor of it is that I do become mildly anxious when people who are responding to me get downvoted by others, which suggests that I fear retaliation.
I thought that too, but I assumed I’d die right after being tortured anyway. And I’d rather live to age n without ever being tortured than live to age n + m being tortured for m years.
Doesn’t appreciably constrain your behavior, though, unless you happen to be the star of a popular Showtime series or something. Declaring a policy is only meaningful if it actually affects your choices, which in this case only makes sense if you expect to be considering mass murder as a solution to your problems.
And in a situation as extreme as that, I wouldn’t be surprised if some otherwise unthinkable subjective downsides came up.
“Choosing infanticide over abandonment is pretty pointless, so why do it?” “Killing another living thing doesn’t qualify as “euthanasia” if you do it for your benefit, not that being’s.”
Let me respond with a little storytelling, without making a clear point. I am not trying to prove you wrong, just sharing my personal experience. Warnings: depressing stories about illnesses; probably unpleasant reading.
I was once friends with a boy who had progressive muscular dystrophy. It is a degenerative disease in which your muscles gradually stop working, and most patients die around the age of 20, because they stop breathing.

If you have heard great stories about people in wheelchairs adapting to their situation, well, here adaptation can only be short-term, because next year you might not be able to do what you can do now. The pain was not excruciating, but there was some; a body deprived of exercise gives you that feedback. If he had a bad dream at night, he could not turn over to his other side (a very common remedy; most people do it without even realizing). The boy made two suicide attempts, although, frankly, he did not really mean them. He would make phone calls to his friends in the evening to relieve his pain; very unwelcome calls. I sometimes pretended not to be at home, and I know other people who did the same (we were in our twenties). His desperation was then deepened by the feeling that he was not loved.

Once he called his psychologist and caught her in the middle of a suicide attempt, poisoned by drugs; she repeated to him HIS own statements from their previous phone calls. I am not saying it was HIS fault; the lady clearly failed to safeguard against the known risks of her profession (and had other problems, a partner who had left her, etc.). I am just illustrating how hard it was sometimes to deal with him. (To close off this branch of the story: he called other people, who saved her life.)

His parents took great care of him, up to the limit of their financial means, supplemented by the limited help of our government. There were frequent conflicts between him and his parents, though, which made him feel unloved again. On the other hand, his parents were deeply religious and later, knowingly, had another baby with the same genetic defect; they did not choose abortion. The older boy died at the age of 28, his life surprisingly long.
This story clearly contains aspects that were not optimized: the parents could have earned more money and brought more comfort to his life; he could have had a personal assistant at night, more physiotherapy exercises, a better computer, some coaching on how to deal with people and find a girlfriend (his desires were strong); he could have tried harder to develop his talents and get a job that would make him feel useful to society. (We eventually persuaded him to get a job as a phone operator; it lasted a year or so.) His friends, including me, could have worked harder on their emotional maturity. But can you see all the energy and resources it takes just to make a misery somewhat better?
Now let us look at a different story, in which the parents of a sick child became EXTREME optimizers. Watch the film Lorenzo’s Oil (http://en.wikipedia.org/wiki/Lorenzo%27s_Oil_%28film%29) or read about Lorenzo Odone (http://en.wikipedia.org/wiki/Lorenzo_Odone). A wonderful and admirable story. But can you see the end result, after you have done everything in your power for your baby?
“Choosing infanticide over abandonment is pretty pointless, so why do it?” Abandoning a baby with a severe genetic defect at birth condemns the baby to an even lower quality of life in most government institutions, unless a millionaire chooses to adopt him.
I have a counterargument to my own reasoning right away: what if some parents had killed their baby diagnosed with adrenoleukodystrophy (but with no developed symptoms yet) a year before Augusto and Michaela Odone invented Lorenzo’s Oil for their son? Such parents would have lost a potentially healthy baby, and the baby would have lost a realistic chance to live a normal life...
I am not really trying to win this argument, just explaining why I sometimes TOY with the idea that infanticide is not so immoral, and why I consider it a form of euthanasia.
There are plenty of diseases we can now deal with quite well because we didn’t kill every infant or adult who had them. It is no coincidence that treatments get found: if we killed everyone with a disease, there would be no search for a treatment.
Is this one of those “torture one person for 50 years” versus “deaths of millions” thought experiments?
Easiest thought experiments ever?
Would you rather be tortured for 3^^^3 years, or have a dust speck in your eye?
If I use UDT2 can I choose ‘both’?
This seems like a good “control” thought experiment to determine whether people are just being contrarian.
I think you’d have to be a pretty unsubtle contrarian to answer that with “torture”.
And yet, at least one person below did just that. Edit: …but later asserted that had been a joke.
I think in this case you can drop the suffix and just say “being contrary”.
More like, to determine whether people are paying any attention. (I once took an online personality test which included questions such as “I’ve never eaten before” to prevent people from using bots or similar to screw up their data.)
It’s hard to get people to answer such things straightforwardly. I once included “Some people have fingernails” in a poll, as about the most uncontroversially true thing I could think of, and participants found a way to argue that it wasn’t true—since “some” understates the proportion.
Well… “Some people” does usually implicate ‘not all people, and not even all people except a non-sizeable minority’, but if we go by implicatures rather than literal meanings, “X has fingernails” (in contexts where everyone knows X is a human), in my experience at least, usually implicates that X’s fingernails are not trimmed nearly as short as possible, since the literal meaning would be quite uninformative once you know X is a human.
“There exists at least one X that …” is what logicians have settled on as the most easily satisfiable and least objectionable phrasing.
That’s not that easy, unless having a dust speck in my eye also entails my living for 3^^^3 years.
I nominate ABrooks as this month’s contrarian.
Wait, what?
To clarify:
A = Dust speck in your eye, and your life is otherwise as it would have been without this deal.
B = 3^^^3 years of torture, followed by death.
Is that an easy choice for you?
If not, can you summarize your arguments in favor of choosing B?
Well, if I choose B, I’ll be alive for a very large number of years. I’ll be alive so long, that I expect that I’ll get used to anything deployed to torture me. And I’ll be alive so long, I’d need to study a fair amount of cosmology just to understand what my lifetime will involve, by way of the deaths and rebirths of whole universes or whatever. Some of that would be interesting to see.
The easy thought experiment would be dust speck vs. 3 years of torture followed by death. I think there, I’d go with the speck.
Is this based on the experience of torture victims? I think that “get used to” would more closely resemble “catatonic” than “unperturbed.” I don’t think your ability to be interested would survive very long.
I wonder if there’s a case study of an individual who’s been exposed to prolonged torture. We’d probably have to look through Nazi and Japanese experiments.
(takes deep breath)
AAAAAAAAAAAAAAAAAIIIIIEEEEEEEEEEEE
sorry, I just had to scream for a bit
Them dust specks hurtin’?
I...um. Are you agreeing with me? Or did I say something stupid?
I think you can be confident that he’s not agreeing with you.
I ask only that people disagree with me in such a way that my errors are corrected.
If you’ve acclimated to torture it’s no longer torture.
If you’ve acclimated to torture the effects have likely left you with a life not worth living.
Torture isn’t something you can acclimate yourself to in hypotheticals. E.g., the interlocutor could say “oh, you would acclimate to waterboarding? Well then I’ll scoop your brain out, intercept your sensory modalities, and feed you horror. But wait, just when you’re getting used to it, I’ll wipe your memory.”
All this misses the point of the hypothetical by being too focused on the details rather than the message. Have you ever told someone the trolley experiment and had them say something like “but I would call the police, or I’m not strong enough to push a fat man over,” and had to reform the experiment over and over until they got the message?
This is a fair point. Though my response was very much intended to be a joke.
I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one, and it’s no accident that it’s people who hear these problems for the first time that react like this. They’re the only ones taking it seriously: moral reasoning is not hypothetical, and what they’re doing is refusing to treat the problem hypothetically.
Learning to operate within the hypothetical just means learning to stop seeing it as an opportunity for moral reasoning. After that, all we’re doing is trying to maximize a value under a theory. But that’s neither here nor there.
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me. Indeed, I’m inclined to doubt it.
In much the same way: if I’m asked to multiply 367 by 1472 the response I would give in the real world is to launch a calculator application, but when asked to do this by the woman giving me a neuropsych exam after my stroke I didn’t do that, because I understood that the goal was not to find out the product of 367 and 1472 but rather to find out something about my brain that would be revealed by my attempt to calculate that product.
I agree with you that it’s no accident that people react like this to trolley problems, but I disagree with your analysis of the causes.
You called the trolley problem a pedagogic tool: what do you have in mind here, specifically? What sort of work do you take the trolley problem to be doing?
It clarifies the contrast between evaluating the rightness of an act in terms of the relative desirability of the likely states of the world after that act is performed or not performed, vs. evaluating the rightness of an act in other terms.
Okay, that sounds reasonable to me. But what do we mean by ‘act’ in this case? We could, for instance, imagine a trolley problem in which no one had the power to change the course of the train, and it just went down one track or the other on the basis of chance. We could still evaluate one outcome as better than the other (it must be the one man dying instead of five), but there’s no action.
Are we making a moral judgement in that case? Or do we reason differently when an agent is involved?
I don’t know who “we” are.
What I say about your proposed scenario is that the hypothetical world in which five people die is worse than the hypothetical world in which one person dies, all else being equal. So, no, my reasoning doesn’t change because there’s an agent involved.
But someone who evaluates the standard trolley problem differently might come to different conclusions.
For example, I know any number of deontologists who argue that the correct answer in the standard trolley problem is to let the five people die, because killing someone is worse than letting five people die. I’m not exactly sure what they would say about your proposed scenario, but I assume they would say in that case, since there’s no choice and therefore no “killing someone” involved, the world where five people die is worse.
Similarly, given someone like you who argues that the correct answer in the standard trolley problem is to “yell real loud or call the police or break the game somehow,” I’m not sure what you would say about your own proposed scenario.
I think it shows someone is trying to “solve” a hypothetical or be clever, because with a trivial amount of deliberation they would anticipate the interlocutor’s response and reform their answer. Moreover, none of this engages the point of the exercise, which you’re free to argue against without being opaque. E.g., “okay, clearly the point of this trolley experiment is to see if my moral intuitions align with consequentialism or utilitarianism; I don’t think this experiment does that because blah blah blah.”
Moreover, moral reasoning is hypothetical if you’re sufficiently reflective.
Well, in what kinds of things does moral reasoning conclude? I suppose I would say ‘actions and evaluations’ or something like that. Can you think of anything else?
Moral reasoning should inform your moral intuitions—what you’ll do in the absence of an opportunity to reflect. How do you prepare your moral intuitions for handling future scenarios?
Well, regardless of whether we have time to reflect or not, I take it moral reasoning or moral intuitions conclude either in an action or in something like an evaluative judgement. This would distinguish such reasoning, I suppose, from theoretical reasoning which begins from and concludes in beliefs. Does that sound right to you?
An evaluative judgement is an action; you’re fundamentally saying moral reasoning has consequences. I agree with that, of course. I don’t think it distinguishes it from theoretical reasoning.
By ‘action’ I mean something someone might see you do, something undertaken intentionally with the aim of changing something around you. But when we ask someone to react to a trolley problem, we don’t expect them to act as a result of their reasoning (since there’s no actual trolley). We just want them to reply. So sometimes moral reasoning concludes merely in a judgement, and sometimes it concludes in an action (if we were actually in the trolley scenario, for example) that will, I suppose, also involve a judgement. Does all this seem reasonable to you?
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion (I think it’s that you want to say hypotheticals produce different results than reality). But to answer your question, I don’t think that giving a result to the trolley problem merely results in a judgement. I think it also potentially results in reflective equilibrium of moral intuitions, which then possibly results in different decisions in the future (I’ve had this experience). I think it also potentially affects the interlocutor or audience.
I’ve already given you my conclusion, such as it is: not that hypotheticals produce different results, but that reasoning about hypotheticals can’t be moral reasoning. I’m just trying to think through the problem myself, I don’t have a worked out theory here, or any kind of plan. If you have a more productive way to figure out how hypotheticals are related to moral reasoning then I’m happy to pursue that.
Right, but I’m just talking about the posing of the question as an invitation for someone to think about it. The aim or end result of that thinking is some kind of conclusion, and I’m just asking what kinds of conclusions moral reasoning ends in. Since we use moral reasoning in deciding how to act, I take it for granted that one kind of conclusion is an action: “It is right to X, and possible for me to X, therefore...” and then comes the action. When someone is addressing a trolley problem, they might think to themselves: “If one does X, one will get the result A, and if one does Y, one will get the result B. A is preferable to B, so...” and then comes the conclusion. The conclusion in this case is not an action, but just the proposition that “...given the circumstances, one should do X.”
ETA: So, supposing that reasoning about the trolley problem here is moral reasoning (as opposed to, say, the sort of reasoning we’re doing when we play a game of chess), then moral reasoning can conclude sometimes in actions, and sometimes in judgements.
Suppose I sit down at time T1 to consider the hypothetical question of what responses I consider appropriate to various events, and I conclude that in response to event E1 I ought to take action A1. Then at T2, E1 occurs, and I take action A1 based on reasoning of the form “That’s E1, and I’ve previously decided that in case of E1 I should perform A1, so I’m going to perform A1.”
If I’ve understood you correctly, the only question being discussed here is whether the label “moral reasoning” properly applies to what occurs at T1, T2, both, or neither.
Can you give me an example of something that might be measurably different in the world under some possible set of conditions depending on which answer to that question turns out to be true?
You’ve understood me perfectly, and that’s an excellent way of putting things. I think there’s an interpretation of those variables such that both what occurs at T1 and at T2 could be called moral reasoning, especially if one expects E1 to occur. But suppose you just, by way of armchair reasoning, decide that if E1 ever happens, you’ll A1. Now suppose E1 has occurred, but suppose also that you’ve forgotten the reasoning which led you to conclude that A1 would be right: you remember the conclusion, but you’ve forgotten why you thought it. That scenario would, I believe, satisfy your description, and it would be a case in which your action is quite suspect. Not wholly so, since you may have good reason to believe your past decisions are reliable, but if you don’t know why you’re acting when you act, you’re not acting in a fully rational way.
I think it would be appropriate to say, in this case, that you are not to be morally praised (e.g. “you’re a good person”, “You’re a hero” etc.) for such an action (if it is good) in quite the measure you would be if you knew what you were doing. I bring up praise, just because this is an easy way for us to talk about what we consider to be the right response to morally good action, regardless of our theories. Does all this sound reasonable?
If what went on at T1 was fully moral reasoning, then no part of the moral action story seems to be left out: you reasoned your way to an action, and at some later time undertook that action. But if it’s true that we would consider an action in which you’ve forgotten your reasoning a defective action, less worthy of moral praise, then we consider it important that the reasoning be present to you as you act.
And I take it for granted, I suppose, that we don’t consider it terribly praiseworthy for someone to come to a bunch of good conclusions from the armchair and never make any effort to carry them out.
I’ll point out again that the phrase “moral reasoning” as you have been using it (to mean praiseworthy reasoning) is importantly different from how that phrase is being used by others.
That aside, I agree with you that in the scenario you describe, my reasoning at T2 (when E1 occurs) is not especially praiseworthy and thus does not especially merit the label “moral reasoning” as you’re using it. I don’t agree that my reasoning at T1 is not praiseworthy, though. If I sit down at T1 and work out the proper thing to do given E1, and I do that well enough that when E1 occurs at T2 I do the proper thing even though I’m not reasoning about it at T2, that seems compelling evidence that my reasoning at T1 is praiseworthy.
Sure, we agree there, I just wanted to point out that the, shall we say, ‘presence’ of the reasoning in one’s action at T2 is both a necessary and sufficient condition for the action’s being morally praiseworthy if it’s good. The reasoning done at T1 is, of itself, neither necessary nor sufficient.
I don’t agree that the action at T2 is necessary. I would agree that in the absence of the action at T2, it would be difficult to know that the thinking at T1 was praiseworthy, but what makes the thinking at T1 praiseworthy is the fact that it led to a correct conclusion (“given E1 do A1”). It did not retroactively become praiseworthy when E1 occurred.
So you would say that deliberating to the right answer in a moral hypothetical is, on its own, something which should or could earn the deliberator moral praise?
Would you say that people can or ought to be praised or blamed for their answers to the trolley problem?
I would say that committing to a correct policy to implement in case of a particular event occurring is a good thing to have done. (It is sometimes an even better thing to have done if I can then articulate that policy, and perhaps even that commitment, in a compelling way to others.)
I think that’s an example of “deliberating to the right answer in a moral hypothetical earning moral praise” as you’re using those phrases, so I think yes, it’s something that could earn moral praise.
People certainly can be praised or blamed for their answers to the trolley problem—I’ve seen it happen myself—but that’s not terribly interesting.
More interestingly, yes, there are types of answers to the standard trolley problem I think deserve praise.
In case of a possible misunderstanding: I didn’t mean to imply that moral reasoning is literally hypothetical, but that hypotheticals can be a form of moral reasoning (and I hope we aren’t arguing about what ‘reasoning’ is). The problem that I think you have with this is that you believe hypothetical moral reasoning doesn’t generalize? If so, let me show you how that might work.
And this could go on and on until you’ve recalibrated your moral intuitions using hypothetical moral reasoning, and now when asked a similar hypothetical (or put in a similar situation) your immediate intuition is to look at the consequences. Why is the hypothetical part useful? It uncovers previously unquestioned assumptions. It’s also a nice compact form for discussing such issues.
We’re not, and I understand. We do disagree on that claim: I’m suggesting that no moral reasoning can be hypothetical, and that if some bit of reasoning proceeds from a hypothetical, we can know on the basis of that alone that it’s not really moral reasoning. I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
This is a good framing, thanks. By ‘on and on’ I assume you mean that the reasoner should go on to examine his decision to look at expected consequences, and perhaps more importantly his preference for the world in which five people live. After all, he shouldn’t trust that any more than the intuition, right?
Can’t that apply to hypotheticals? If you come to the wrong conclusion you’re a horrible person, sort of thing.
I would probably call “moral reasoning” something along the lines of “reasoning about morals”. Even using your above definition, I think reasoning about morals using hypotheticals can result in a judgment, about what sort of action would be appropriate in the situation.
That can’t be what people normally mean by “moral reasoning”. Do you have a philosophy background?
I don’t see why that would be the case. Cheap illustration:
TEACHER: Jimmy, suppose I tell you that P, and also that P implies Q. What does that tell you about Q?
JIMMY: Q is true.
TEACHER: That’s right Jimmy! Your reasoning is praiseworthy!
JIMMY: Getting the right answer while reasoning about that hypothetical fills me with pride!
You’ve taken my conditional: “If something is moral reasoning, it is something for which we can be praised or blamed” for a biconditional. I only intend the former. ETA: I should say more. I don’t mean any kind of praise or blame, but the kind appropriate to morally good or bad action. One might believe that this isn’t different in kind from the sort of praise we offer in response to, say, excellence in playing the violin, but I haven’t gotten the sense that this view is on the table. If we agree that there is such a thing as distinctively moral praise or blame, then I’ll commit to the biconditional.
I suspect ABrooks is continuing his tradition of interpreting “X reasoning” to mean reasoning that has the property of being X, rather than reasoning about X.
If I’m right, I expect his reply here is that your example is not of hypothetical reasoning at all—supposing that actually happened, Jimmy really would be reasoning, so it would be actual reasoning. Sure, it would be reasoning about a hypothetical, but so what?
I share your sense, incidentally, that this is not what people normally mean, either by “moral reasoning” or “hypothetical reasoning.”
It’s not an interpretation, it’s a claim. If something is reasoning about moral subject matter, then, I claim, it is the sort of thing that is (morally) praiseworthy or blameworthy. When we call someone bad or good for something they’ve done, we at least in part mean to praise or blame their reasoning. And one of the reasons we call someone good or bad, or their action good or bad, is an evaluation of their reasoning as good or bad. And praise and blame are, of course, the products of moral reasoning. And we do consider them to be morally valued: to (excepting cases of ignorance) praise bad people is itself bad, and to blame good people is itself good.
Now, the claim I’m arguing against is the claim that there is another kind of moral reasoning which is a) neither praiseworthy, nor blameworthy, b) does not result in an action or an evaluation of an actual person or action, and c) is somehow tied to or predictive of reasoning that is praiseworthy, blameworthy, and resulting in action or actual evaluation.
So I’ve never intended ‘moral reasoning’ to mean ‘reasoning that is moral’ except as a consequence of my argument. That phrase means, in the first place, reasoning about moral matters. Same goes for how I’ve been understanding ‘hypothetical reasoning’. (ETA: though here, I can’t see how one could draw a distinction between ‘reasoning from a hypothetical’ and ‘reasoning that is hypothetical’. I’m not trying to talk about ‘reasoning about a hypothetical’ in the broadest sense, which might include coming up with trolley problems. I only mean to talk about reasoning that begins with a hypothetical.)
I am sorry if that hasn’t been clear.
Er. Just to make sure I understand this: is “whether it’s correct to put babies in a blender for fun” moral subject matter? If so, does it follow that if I am reasoning about whether it’s correct to put babies in a blender for fun, I am therefore something that is reasoning about moral subject matter? If so, does it follow that I am the sort of thing that is morally praiseworthy or blameworthy?
Sure, if I were to say “Sam is a bad person” because Sam did X, I would likely be trying to imply something about the thought process that led Sam to do X.
I agree that it’s possible for me to call Sam “good” or “bad” based on some aspect of their reasoning, as above, though I don’t really endorse that usage. I agree that it’s possible to call Sam’s act “good” or “bad” based on some aspect of Sam’s reasoning, although I don’t endorse that usage either. I agree that it’s possible to label reasoning that causes me to call either Sam or Sam’s act “good” or “bad” as “good reasoning” or “bad reasoning”, respectively, but this is neither something I could ever imagine myself doing, nor the interpretation I would naturally apply to labeling reasoning in this way.
That’s not clear to me.
That’s not clear to me either.
That’s definitely not clear to me.
Ah, OK. That was in fact not clear; thanks for clarifying it.
Not necessarily, it may or may not be taken up as a moral question. We can, for example, study just how much fun it is and leave aside the question of its moral significance. If you’re reasoning about whether or not it’s right in some moral sense to put babies in a blender, then you’re doing something like moral reasoning, but if this were purely in the hypothetical then I think it would fall short. If you were seriously considering putting babies in a blender, then I think I’d want to call it moral reasoning, but in this case I think you could obviously be praised or blamed for your answer (well, maybe not praised so much).
Sorry, typo. I meant ‘to blame good people (or to blame people for good actions) is bad.’ It shows some praiseworthy decency to appreciate the moral life of, I dunno, MLK. It shows real character to stick up for a good but maligned person. Likewise, it shows some shallowness to have praised someone who only appeared good, but was in fact bad. And it shows some serious defect of character to praise someone we know to be bad (I dunno, Manson?).
What’s the difference between agreeing here, and endorsing the usage?
OK, so just to be clear, you would say that the following are examples of moral reasoning...
“It would be fun to put this baby in that blender, and I want to have fun, but it would be wrong, so I won’t”
“It would be wrong to put this baby in that blender, and I don’t want to be wrong, but it would be fun, so I will”
...and the following are not:
“In general, putting babies in blenders would be fun, and I want to have fun, but in general it would be wrong, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would not do so, all else being equal.”
“In general, putting babies in blenders would be wrong, and I don’t want to be wrong, but in general it would be fun, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would do so, all else being equal.”
Yes? No?
If so, I continue to disagree with you; I absolutely would call those last two cases examples of moral reasoning.
If not, I don’t think I’m understanding you at all.
If A is some object or event that I observe, and L is a label in a language that consistently evokes a representation of A in the minds of native speakers, I agree that it’s possible for me to call A L. If using L to refer to A has other effects beyond evoking A, and I consider those effects to be bad, I might reject using L to refer to A.
For example, I agree that the label “faggot” reliably refers to a male homosexual in American English, but I don’t endorse the usage in most cases because it’s conventionally insulting. (There are exceptions.)
Incidentally, here you demonstrate one of the behaviors that causes me not to endorse the usage of calling Sam “good” or “bad” in this case. First you went from making an observation about a particular act of reasoning to labeling the reasoner in a particular way, and now you’ve gone from labeling the reasoner in that way to inferring other facts about the reasoner. I would certainly agree that the various acts we’re talking about are evidence of praiseworthy decency on Sam’s part, but the way you are talking about it makes it very easy to make the mistake of treating them as logically equivalent to praiseworthy decency.
People do this all the time (e.g., the fundamental attribution error), and it causes a lot of problems.
Oh!
I understand you now.
Thanks for clarifying this.
Also...
Can you please clarify which of your comments in this thread you stand by, and which ones you don’t stand by?
I stand by everything I said about trolley problems. I don’t think an eternity of torture is preferable to a dust speck in one’s eye.
Until you posted this comment, I thought your response was intended as humor.
Edit: And not of the ha-ha-only-serious type.
OK, thanks for clarifying.
An obvious argument in favor of B is that you get to live for 3^^^3 years. A reframing:
A = Dust speck in your eye, after which you lead a normal life except that you cease to exist a mere 60 years later.
B = Tortured for the rest of your life, but you never die.
B is just the traditional idea of hell, isn’t it? (IIRC, the present-day Catholic Church’s idea is that hell is just the inability to see God.)
(nods) That seemed the obvious argument, as you say, though it depends on the notion that being tortured for a year is a net utility gain (relative to not existing for that year at all), which seemed implausible to me. But it turns out that is indeed what ABrooks meant.
(shrug) No accounting for taste.
Edit: He later asserted that had been a joke.
This is another great example of a comment that should have been silently downvoted, not responded to.
I generally avoid downvoting comments that are direct responses to me. I’m not exactly sure why, beyond a sense that it just feels wrong, although I can justify it in a number of different ways that I’m pretty sure aren’t my real reasons.
I do the same. The reasoning that comes to mind is that the timing tends to imply that you did it, and that that—especially if you’re already in an adversarial mode—can provoke a cycle of retaliation that’s harmful to your karma and doesn’t carry much informative value. Short of that, I feel it carries adversarial implications that are harmful to the quality of discussion.
I’m reasonably sure that that’s my true objection.
Yeah, that’s plausible in my case as well. Evidence in favor of it is that I do become mildly anxious when people who are responding to me get downvoted by others, which suggests that I fear retaliation.
Anyone who has to respond to me has suffered enough already.
I thought that too, but I assumed I’d die right after being tortured anyway. And I’d rather live to age n without ever being tortured than live to age n + m being tortured for m years.
Note that you’re arguing that your preferred policy can never have true drawbacks, rather than arguing that it’s worth it on balance. Be careful.
A policy of not mass-murdering people is as close to drawback-free as it gets.
I’m sure you can figure out some trivial drawbacks if you want.
Doesn’t appreciably constrain your behavior, though, unless you happen to be the star of a popular Showtime series or something. Declaring a policy is only meaningful if it actually affects your choices, which in this case only makes sense if you expect to be considering mass murder as a solution to your problems.
And in a situation as extreme as that, I wouldn’t be surprised if some otherwise unthinkable subjective downsides came up.