Great post. Here’s my unvarnished answer: I wouldn’t jump, and the reasons why involve my knowledge that I have a 7-year-old daughter and the (Motivated Reasoning and egotism alert!!) idea that I have the potential to improve the lives of many people.
Now of course, it’s EXTREMELY likely that one or more of the other people in this scenario is a parent, and for all I know one of them will invent a cure for cancer in the future. In point of fact, if I were to HONESTLY evaluate the possibility that one of the other players has a potential to improve the planet more than I do, the likelihood may be as great as the likelihood that one of the other players is also a parent. Which makes me think that yes, my incentives are screwed up here and the correct answer is: I should be as willing to jump as to push the fat man off the bridge.
I also note that, if my wife or daughter were one of the people tied to the track, I would unhesitatingly throw myself off. This makes me conclude that I should want to throw myself off the bridge (because the supposed, flimsy ‘rational altruistic’ reason—that I have the potential to help people—is revealed to be bogus). I still wonder, however, if there is any possible rational reason to not choose to sacrifice oneself in the scenario. I am unable to come up with one.
I still wonder, however, if there is any possible rational reason to not choose to sacrifice oneself in the scenario.
Of course there is—e.g. if you care more for yourself than for other people, rationality doesn’t compel you to sacrifice even a cent of your money, let alone your life, for the sake of others.
People must REALLY REALLY stop confusing what is “rational” and what is “moral”. Rationality says nothing about what you value, only about how to achieve it.
They must also stop confusing “should”, “would”, and “I would prefer to”.
I’m not sure what ‘should’ means if it doesn’t somehow cash out as preference.
Yeah, the two concepts are “somehow” connected; we can see that because moral considerations act on our preferences, and most moral philosophies take the preferences of others into consideration when deciding what the moral thing to do is.
But the first thing that you must see is that the concepts are not identical. “I prefer X to happen” and “I find X morally better” are different things.
Take random parent X and they’ll care more about the well-being of their own child than for the welfare of a million other children in the far corner of the world. That doesn’t mean they evaluate a world where a million other children suffer to be a morally better world than a world where just theirs does.
Here’s what I think “should” means. I think “should” is an attempted abstract calculation of our preferences under an attempted depersonalization of the given context. To put it differently, I think “should” is what we believe we’d prefer to happen if we had no personal stakes involved, or what we believe we’d feel about the situation if our empathy were not centered on our nearest and dearest.
EDIT TO ADD: If I had to guess further, I’d guess that the primary evolutionary reason for our sense of morality is probably not to drive us via guilt and duty but to drive us via moral outrage—and that guilt is there only as our imagined perception of the moral outrage of others. To test that, I’d like to see whether there have been studies to determine whether people who are guilt-free (e.g. psychopaths) are also free of a sense of moral outrage.
I find myself thinking mostly around the same lines as you, and so far the best I’ve been able to come up with is “I’m willing to accept a certain amount of immorality when it comes to the welfare of my wife and child”.
I’m not really comfortable with the implications of that, or that I’m not completely confident it’s not still a rationalization.
Is there an amount of human suffering of strangers to avoid which you’d consent to have your wife and child tortured to death?
Also, you’re “allowed” your own values—no need for rationalizations for your terminal values, whatever they may be. If the implications make you uncomfortable (maybe they aren’t in accordance with facets of your self-image), well, there’s not yet been a human with non-contradictory values so you’re in good company.
Is there an amount of human suffering of strangers to avoid which you’d consent to have your wife and child tortured to death?
My first instinct was to try to find the biggest font I could to say ‘no’. After actually stopping to think about it for a few minutes… I don’t know. It would probably have to be enough suffering to destabilize society, but I haven’t come to any conclusions. Yet.
If the implications make you uncomfortable (maybe they aren’t in accordance with facets of your self-image), well, there’s not yet been a human with non-contradictory values so you’re in good company.
Heh, well, I suppose you’ve got a point there, but I’d still like my self-image to be accurate. Though I suppose around here that kind of goes without saying.
My first instinct was to try to find the biggest font I could to say ‘no’. After actually stopping to think about it for a few minutes… I don’t know. It would probably have to be enough suffering to destabilize society, but I haven’t come to any conclusions. Yet.
That sounds a bit like muddling the hypothetical, along the lines of “well, if I don’t let my family be tortured to death, all those strangers dying would destabilize society, which would also cause my loved ones harm”.
No. Consider the death of those strangers to have no discernible impact whatsoever on your loved ones, and to keep the numbers lower, let’s compare “x strangers tortured to death” versus “wife and child tortured to death”. Solve for x. You wouldn’t need to watch the deeds in both cases (although feel free to say what would change if you’d need to watch when choosing against your family), it would be a button choice scenario.
The difference between myself and many others on LW is that not only would I unabashedly decide in favor of my loved ones over an arbitrary amount of strangers (whose fate wouldn’t impact us), I do not find any fault with that choice, i.e. it is an accurate reflection of my prioritized values.
I’d still like my self-image to be accurate.
As the saying goes, “if the hill will not come to Skeeve, Skeeve will go to the hill”. There’s a better alternative to trying to rewrite your values to suit your self-image: constructing an honest self-image that reflects your values.
That sounds a bit like muddling the hypothetical, along the lines of “well, if I don’t let my family be tortured to death, all those strangers dying would destabilize society, which would also cause my loved ones harm”.
That was the sort of line I was thinking along, yes. Framing the question in that fashion… I’m having some trouble imagining numbers of people large enough. It would have to be something on the order of ‘where x contains a majority of any given sentient species’.
The realization that I could willingly consign billions of people to death and be able to feel like I made the right decision in the morning is… unsettling.
As the saying goes, “if the hill will not come to Skeeve, Skeeve will go to the hill”.
I wish I could upvote you a second time just for this line. But yes, this is pretty much what I meant; I didn’t intend to imply that I wanted my self-image to be accurate and unchanging from what it is now, I’d just prefer it to be accurate.
Is there an amount of human suffering of strangers to avoid which you’d consent to have your wife and child tortured to death?
The hypothetical is posed at what is, to me, an unsatisfactory degree of abstraction. How about a more concrete form?
You are fighting in the covert resistance against some appallingly oppressive regime. (Goodness knows the 20th century has enough examples to choose from.) You get the news that the regime is onto you and has taken your wife and child hostage. What do you do?
We may grok that scenario in decidedly different ways:
Maybe it would serve the wife and child best if I were successful in my resistance to some degree, to have a better bargaining situation? Maybe if I gave myself up, the regime would lose any incentive to keep the hostages alive? At that point we’d just be navigating the intricacies of such added details. Better to stick with the intent of the actions: Personally, I’d take the course of action most likely to preserve the wife and child’s well-being, but then I probably wouldn’t have grown into a role which exposes family to the regime as high-value bargaining chips.
What is immorality then? Even a theist would say “morality is that which is good and should be done, and immorality is that which is not good and should not be done.” If you think it would be immoral to spare your wife and child, then you are saying it is not a good thing and shouldn’t be done. I am pretty sure protecting your family is a good thing, and most people would agree.
The problem, I think, is not that it is immoral to not push your wife and child in front of a moving train, even to save 5 others, but that it is immoral to push any individual in front of a train to save some other individuals.
If you increase the numbers enough, though, I would think it changes, since you are not just saving others, but society, or civilization or a town or what have you. Sacrificing others for that is acceptable, but rarely does this require a single person’s sacrifice, and it usually requires the consent and deliberation of the society under threat. Hence why we have the draft.
What I mean by ‘immorality’ is that I, on reflection, believe I am willing to break rules that I wouldn’t otherwise if it would benefit my family. Going back to the original switch problem, if it was ten people tied to the siding, and my wife and child tied to the main track, I’d flip the switch and send the train onto the siding.
I don’t know if that’s morally defensible, but it’s still what I’d do.
I’m finding myself disappointed that so many people have trouble distinguishing between “would”, “should”, and “prefer”.
You’re just saying that: a) you’d prefer to save your family; b) you believe you would save your family; c) you probably should not.
There’s nothing at all contradictory in the above statements. You would do something and prefer to do something that you recognize you shouldn’t. What you “prefer” and what you “would” and what you “should” are all different logical concepts, so there’s no reason to think they always coincide even when they often do.
I don’t think I was having any trouble distinguishing between “would”, “should”, and “prefer”. Your analysis of my statement is spot on—it’s exactly what I was intending to say.
If morality is (rather simplistically) defined as what we “should” do, I ought to be concerned when what I would do and what I should do don’t line up, if I want to be a moral person.
Ah, but they “should” coincide. And if this is a moral problem, it is in the realm of the “should”. If it is a question of whether you are a moral person, then it is in the realm of the “would”. As for “prefer”, that is the most fluid concept, meaning either a weighing of contrasting values or your emotions on the matter.
This is incorrect. “Should” and “prefer” can’t give different answers for yourself, unless you really muddle the entire issue of morality altogether. Hopefully we can all agree that there is no such thing as an objective morality written down on the grand Morality Rock (and even if there were, there would be no reason to actually follow it or call it moral). If we can’t, then let me know and I’ll defend that rather than the rest of this post.
The important question is: what the hell do we mean by “morality”? It’s not something we can find written down somewhere on one of Jupiter’s moons, so what exactly is it, where does it come from, and most importantly, where do our intuitions and knowledge about it come from?

The answer that seems most useful is that morality is the algorithm we want to use to determine what actions to take, if we could self-modify to be the kind of people we want to be. It comes from reflecting on our preferences and values and deciding which we think are really and truly important and which we would rather do without. We can’t always do it perfectly right now, because we run on hostile hardware, but if we could reflect on all our choices perfectly then we would always choose the moral one. That seems to align with our intuitions of morality as the thing we wish we could do, even if we sometimes can’t or don’t due to akrasia or just lack of virtue.

Thus, it is clear that there is a difference between what we “should” do and what we “would” do (just as there is sometimes a difference between the best answer we can get for a math problem and the one we actually write down on the test). But it’s clear that there is no difference between what we “should” do and what we would prefer we do. Even if you think my definition of morality is missing something, it should be clear that morality cannot come from anywhere other than our preferences. There simply isn’t anywhere else we could get information about what we “should” do which anyone in their right mind would not just ignore.
In short, if I would do x, and I prefer to do x, then why the heck would/should I care whether I should do x?! Morality in that case is completely meaningless; it’s no more useful than whatever’s written on the great Morality Rock. If I don’t prefer to act morally (according to whatever system is given) then I don’t care whether my action is “moral”.
“Should” and “prefer” can’t give different answers for yourself, unless you really muddle the entire issue of morality altogether.
But they do. “I know I shouldn’t, but I want to.” And since they do so often give different answers, they can give different answers.
Hopefully we can all agree that there is no such thing as an objective morality written down on the grand Morality Rock
I think we’re both in agreement that when we talk about “morality” we are in reality discussing something that some part of our brain is calculating or attempting to calculate. The disagreement between us is about what that calculation is attempting to do.
The answer that seems most useful is that morality is the algorithm we want to use to determine what actions to take, if we could self-modify to be the kind of people we want to be.
First of all, even that’s different from morality=preference—my calculated Morality(X) of an action X wouldn’t be calculated by my current Preference(CurrentAris, X), but rather by my estimated Preference(PreferredAris, X). So it would still allow that what I prefer to do is different from what I believe I should do.
Secondly, your definition doesn’t seem to me to explain how I can judge some people more moral than me and yet NOT want to be as moral as they are—can I invite you to read “The Jain’s Death”?
SOME SPOILERS for “The Jain’s Death” follow below... Near the end of the first part of the comic, the Jain in question engages in a self-sacrificial action, which I don’t consider morally mandatory—I’m not even sure it’s morally permissible—and yet I consider her a more moral person than I am. I don’t want to be as moral as she is.
My own answer about what morality entails is that it’s an abstraction of our preferences in the attempted de-self-centering of context. Let’s say that a fire marshal has the option of saving either 20 children in an orphanage or your own child.
What you prefer to do is that he save your own child.
What you recognize as moral is that he save the 20 children.
That’s because, if you had no stakes in the issue, that’s what your preference about what he should do would be.
So morality is not preference, it’s abstracted preference.
And abstracted preference feeds back to influence actual preference, but it doesn’t fully replace the purely amoral preference.
So in that sense I can’t depersonalize so much to consider the Jain’s action better than mine would have been, so I don’t consider her action morally better. I don’t want to depersonalize that much either—so I don’t want to be as moral as she is. But she is more moral than me, because I recognize that she does depersonalize more, and lets that abstraction move her actions further than I ever would want to.
I think my answer also explains as to why some people believe morality objective and some view it as subjective. Because it’s the subjective attempt at objectivity. :-)
In short, if I would do x, and I prefer to do x, then why the heck would/should I care whether I should do x?!
You would care about whether you should do x as a mere function of our brains—we’re wired so that the morality of a deed acts on our preferences. All other things being equal, the positive morality of a deed tends to act positively on our preferences.
You have a very specific, universal definition of morality, which does seem to meet some of our intuitions about the word but which is generally not at all useful outside of that. Specifically, for some reason when you say moral you mean unselfish. You mean what we would want to do if we, personally, were not involved. That captures some of our intuitions, but only does so insofar as that is a specific thing that sounds sort of good and that therefore tends to end up in a lot of moral systems. However, it is essentially a command from on high—thou shalt not place thine own interests above others.

I, quite frankly, don’t care what you think I should or shouldn’t do. I like living. I value my life higher than yours, by a lot. I think that in general people should flip the switch on the trolley problem, because I am more likely to be one of the 5 saved than the 1 killed. I think that if I already know I am the one, they should not. I understand why they wouldn’t care, and would flip it anyway, but I would do everything in my power (including use of the Dark Arts, bribes, threats, and lies) to convince them not to. And then I would walk away feeling sad that 5 people died, but nonetheless happy to be alive. I wouldn’t say that my action was immoral; on reflection I’d still want to live.
The major sticking point, honestly, is that the concept of morality needs to be dissolved. It is a wrong question. The terms can be preserved, but I’m becoming more and more convinced that they shouldn’t be. There is no such thing as a moral action. There is no such thing as good or evil. There are only things that I want, and things that you want, and things that other agents want. Clippy the paperclip maximizer is not evil, but I would kill him anyway (unless I could use him somehow with a plan to kill him later). I would adopt a binding contract to kill myself to save 5 others on the condition that everyone else does the same; but if I already know that I would be in a position to follow through on it then I would not adopt it. I don’t think that somehow I “should” adopt it even though I don’t want to, I just don’t want to adopt it and should is irrelevant (it’s exactly the same operation, mentally, as “want to”).
Basically, you’re trying to establish some standard of behavior and call it moral. And you’re wrong. That’s not what moral means in any sense other than that you have defined it to mean that. Which you can’t do. You’ve gotten yourself highly confused in the process. Restate your whole point, but don’t use the words moral or should anywhere (or synonyms). What you should find is that there’s no longer any point to be made. “Moral” and “should” are buzzwords with no meaning, but they sound like they should be important, so everyone keeps talking about them, throwing out nice-sounding things and calling them moral, and getting contradicted by other people throwing out other nice-sounding things and calling those moral. Sometimes I think the fundamentalist theists have it better figured out; “moral” is what God says it is, and you care because otherwise you’re thrown into fire!
I think that in general people should flip the switch on the trolley problem, because I am more likely to be one of the 5 saved than the 1 killed. I think that if I already know I am the one, they should not.
Let’s consider two scenarios: X: You are the one, the train is running towards the five, and Bob chooses to flip the switch so that it kills you instead. Y: You are among the five, the train is running towards the one, and Bob chooses to flip the switch so that it kills the five instead.
In both scenarios Bob flips the switch and as a result you die—but I think that in the case of action Y, where you are one of the five, you’d also likely experience a sense of moral outrage towards Bob that you would be lacking in the case of action X.
There are only things that I want, and things that you want, and things that other agents want.
There exist moral considerations in someone choosing his actions much like there exist considerations of taste in someone choosing his lunch. If you fail to acknowledge this, you’ll simply be predicting the actions of moral individuals wrongly.
Restate your whole point, but don’t use the words moral or should anywhere (or synonyms).
Okay. There’s a mechanism in our brains that serves to calculate our abstracted preferences for behaviours—in the sense of attempting to calculate a preference as if we had no stakes in the given situation. The effects of this mechanism are several: it produces positive emotions towards people and behaviours that follow said abstracted preferences, negative emotions towards people and behaviours that don’t, and it contributes to determining our own actions, causing negative self-loathing feelings (labelled guilt or shame) when we fail to follow said abstracted preferences.
What you should find is that there’s no longer any point to be made. “Moral” and “should” are buzzwords with no meaning,
I think I did a good job above. You’ve failed to make your case to me that there is no meaning behind “moral” and “should”. We recognize the effects of morality (outrage, applauding, guilt), but we’re not self-aware enough about the mechanism of our moral calculation itself. But that isn’t surprising to me; there’s hardly any pattern recognition in our brains whose mechanism we are self-aware about (I don’t consciously think “such-a-nose and such a face-shape” when my brain recognizes the face of my mother).
The only difference between optical pattern recognition and moral pattern recognition is that the latter deals with behaviours rather than visible objects. To tell me that there’s no morality is like telling me there’s no such thing as a square. Well sure, there’s no Squareness Rock somewhere in the universe, but it’s an actual pattern that our brains recognize.
It seems like a rather different statement to say that there exists a mechanism in our brains which tends to make us want to act as though we had no stakes in the situation, as opposed to talking about what is moral. I’m no evo-psych specialist, but it seems plausible that such a mechanism exists. I dispute the notion that such a mechanism encompasses what is usually meant by morality. Most moral systems do not resolve to simply satisfying that mechanism. Also, I see no reason to label that particular mechanism “moral”, nor its output those things we “should” do (I don’t just disagree with this on reflection; it’s actually my intuition that “should” means what you want to do, while impartiality is a disconnected preference that I recognize but don’t associate even a little bit with should. I don’t seem to have an intuition about what morality means other than doing what you should, but then I get a little jarring sensation from the contact with my should intuition...).

You’ve described something I agree with after the taboo, but which before it I definitely disagree with. It’s just an issue of semantics at this point, but semantics are also important. “Morality” has really huge connotations for us; it’s a bit disingenuous to pick one specific part of our preferences and call it “moral”, or what we “should” do (even if that’s the part of our brain that causes us to talk about morality, it’s not what we mean by morality). I mean, I ignore parts of my preferences all the time. A thousand shards of desire and all that. Acting impartially is somewhere in my preferences, but it’s pretty effectively drowned out by everything else (and I would self-modify away from it given the option—it’s not worth giving anything up for on reflection, except as social customs dictate).
I can identify the mechanism you call moral outrage though. I experience (in my introspection of my self-simulation, so, you know, reliable data here /sarcasm) frustration that he would make a decision that would kill me for no reason (although it only just now occurred to me that he could be intentionally evil rather than stupid—that’s odd). I oddly experience a much stronger reaction imagining him being an idiot than imagining him directly trying to kill me. Maybe it’s a map from how my “should” algorithm is wired (you should do that which on reflection you want to do) onto the situation, which does make sense. I dislike the goals of the evil guy, but he’s following them as he should. The stupid one is failing to follow them correctly (and harming me in the process—I don’t get anywhere near as upset, although I do get some feeling from it, if he kills 5 to save me).
In short, using the word moral makes your point sound really different than when you don’t. I agree with it, mostly, without “moral” or “should”. I don’t think that most people mean anything close to what you’ve been using those words to mean, so I recommend some added clarity when talking about it. As to the Squareness Rock, “square” is a useful concept regardless of how I learned it—and if it was a Harblan Rock that told me a Harblan was a rectangle with sides in a 2:9 ratio, I wouldn’t care (unless there were special properties about Harblans). A Morality Rock only tells me some rules of behavior, which I don’t care about at all unless they line up with the preferences I already had. There is no such thing as morality, except in the way it’s encoded in individual human brains (if you want to call that morality, since I prefer simply calling it preferences); and your definition doesn’t even come close to the entirety of what is encoded in human brains.
“Should” and “prefer” can’t give different answers for yourself, unless you really muddle the entire issue of morality altogether.
(shrug) I find that “prefer” can give different answers for myself all on its own.
The important question is: what the hell do we mean by “morality”?
I’m not sure that is an important question, actually. Let alone the important question. What makes you think so?
morality cannot come from anywhere other than our preferences.
No argument there, ultimately. But just because my beliefs about what I should do are ultimately grounded in terms of my preferences, it still doesn’t follow that in every situation my beliefs about what I should do will be identical to my beliefs about what I prefer to do.
Given that those two things are potentially different, it’s potentially useful to have ways of talking about the difference.
By the important question, I meant the important question with regard to the problem at hand. Ultimately I’ve since decided that the whole concept of morality is a sort of Wrong Question; discourse is vastly improved by eliminating the word altogether (and not replacing with a synonym).
What is the process which determines what you should do? What mental process do you perform to decide that you should or shouldn’t do x? When I try to pinpoint it, I just keep finding myself using exactly the same thoughts as when I decide what I prefer to do. When I try to reflect back to my days as a Christian, I recall checking against a set of general rules of good and bad and determining where something lies on that spectrum. “Should” can mean something different from “want” in the sense of “according to the Christian Bible, you should use any means necessary to bring others to believe in Christ even if that hurts you.” But when talking about yourself? What’s the rule set you’re comparing to? I want to default to comparing to your preferences. If you don’t do that, then you need to be a lot more specific about what you mean by “should”, and indeed why the word is useful at all in that context.
The mental process I go through to determine my preferences is highly scope-sensitive.
For example, the process underlying asking “which of the choices I have the practical ability to implement right now do I prefer?” is very different from “which of the choices I have the intellectual ability to conceive of right now do I prefer?”, which is very different from “do I prefer to choose from among my current choices, or defer choosing?”
Also, the answer I give to each of those questions depends a lot on what parts of my psyche I’m most identifying with at the moment I answer.
Many of my “should” statements refer to the results of the most far-mode, ego-less version of “prefer” that I’ve cached the results of evaluating. In those cases, yes, “should” is equivalent to (one and only one version of) “prefer.” Even in those cases, though, “prefer” is not (generally) equivalent to “should,” though in those cases I am generally happiest when my various other “prefers” converge on my “should”.
There are also “should” statements I make which are really social constructs I’ve picked up uncritically. I make some effort to evaluate these as I identify them and either discard them or endorse them on other grounds, but I don’t devote nearly the effort to that that would be required to complete the task. In many of those cases, my “should” isn’t equivalent to any form of “prefer,” and I am generally happiest in those cases when I discard that “should”.
I can see that. It would be a difficult choice, but I would do the same. I think it is morally defensible.
[Edit] On second thought, I am not a husband or father, but I would like to think I will one day have a family with enough heroic virtue to be willing to sacrifice their lives for others. How I would behave, again, is subject to my emotions, but I would like to honor that wish.
There’s a possible loophole in the possibility that living with the grief of your dead family (and specifically, the knowledge that you could have prevented it) would prevent you from making the world so super-awesome.
for all I know one of them will invent a cure for cancer in the future
What are the actual odds of that, though? Compared to the good you do (you’re on LW, so I’m guessing you’re more likely to do some rational altruism and save more than five lives than they are.)
I also note that, if my wife or daughter was one of the people tied to the track, I would unhesitatingly throw myself off. This makes me conclude that I should want to throw myself off the bridge (because the supposedly, flimsily ‘rational atruistic’ reason—that I have the potential to help people—is revealed to be bogus).
I would assume you’re massively biased/emotionally compromised with regards to that scenario, just for evopsych reasons. So I’d be iffy about using that as a yardstick.
That said, you also presumably know them better, so there’s the risk that you’re treating the five victims as faceless NPCs.
I still wonder, however, if there is any possible rational reason to not choose to sacrifice oneself in the scenario. I am unable to come up with one.
Ultimately, it comes down to instrumental values. Saving the five is a 5× payoff that automatically nets four lives, so you would have to be noticeably above average—but I’d say there’s enough low-hanging fruit around that that’s far from impossible.
After all, it’s not like these people are signed up for cryonics.
Which makes me think that yes, my incentives are screwed up here and the correct answer is: I should be as willing to jump as to push the fat man off the bridge.
Beware of the straw Vulcan/Dickensian rule “The needs of the many...”. This is deontology disguising as utilitarianism. Sometimes it works and sometimes it doesn’t, and you don’t have to feel bad when it doesn’t.
Great post. Here’s my unvarnished answer: I wouldn’t jump, and the reasons why involve my knowledge that I have a 7-year old daughter and the (Motivated Reasoning and egotism alert!!) idea that I have the potential to improve the lives of many people.
Now of course, it’s EXTREMELY likely that one or more of the other people in this scenario is a parent, and for all I know one of them will invent a cure for cancer in the future. In point of fact, if I were to HONESTLY evaluate the possibility that one of the other players has a potential to improve the planet more than I do, the likelihood may be as great as the likelihood that one of the other players is also a parent. Which makes me think that yes, my incentives are screwed up here and the correct answer is: I should be as willing to jump as to push the fat man off the bridge.
I also note that, if my wife or daughter was one of the people tied to the track, I would unhesitatingly throw myself off. This makes me conclude that I should want to throw myself off the bridge (because the supposedly, flimsily ‘rational atruistic’ reason—that I have the potential to help people—is revealed to be bogus). I still wonder, however, if there is any possible rational reason to not choose to sacrifice oneself in the scenario. I am unable to come up with one.
Of course there is—e.g. if you care more for yourself than for other people, rationality doesn’t compel you to sacrifice even a cent of your money, let alone you life, for the sake of others.
People must REALLY REALLY stop confusing what is “rational” and what is “moral”. Rationality says nothing about what you value, only about how to achieve it.
They must also stop confusing “should” “would” and “I would prefer to”.
I’m not sure what ‘should’ means if it doesn’t somehow cash out as preference.
Yeah, “somehow” the two concepts are connected, we can see that, because moral considerations act on our preferences, and most moral philosophies take the preferences of others in considerations when deciding what’s the moral thing to do.
But the first thing that you must see is that the concepts are not identical. “I prefer X to happen” and “I find X morally better” are different things.
Take a random parent X and they’ll care more about the well-being of their own child than about the welfare of a million other children in a far corner of the world. That doesn’t mean they evaluate a world where a million other children suffer to be a morally better world than one where just theirs does.
Here’s what I think “should” means. I think “should” is an attempt to abstractly calculate our preferences over a depersonalized version of the given context. To put it differently, I think “should” is what we believe we’d prefer to happen if we had no personal stakes involved, or what we believe we’d feel about the situation if our empathy were not centralized around our nearest and dearest.
EDIT TO ADD: If I had to guess further, I’d guess that the primary evolutionary reason for our sense of morality is probably not to drive us via guilt and duty but to drive us via moral outrage—and that guilt is there only as our imagined perception of the moral outrage of others. To test that I’d like to see if there have been studies to determine whether people who are guilt-free (e.g. psychopaths) are also free of a sense of moral outrage.
Well, anonymity does lead to antisocial behavior in experiments … and on 4chan, for that matter.
On the other hand, 4chan is also known for group hatefests of moral outrage which erupt into DDOS attacks and worse.
I find myself thinking mostly around the same lines as you, and so far the best I’ve been able to come up with is “I’m willing to accept a certain amount of immorality when it comes to the welfare of my wife and child”.
I’m not really comfortable with the implications of that, or with the fact that I’m not completely confident it’s not still a rationalization.
Is there an amount of human suffering of strangers to avoid which you’d consent to have your wife and child tortured to death?
Also, you’re “allowed” your own values—no need for rationalizations for your terminal values, whatever they may be. If the implications make you uncomfortable (maybe they aren’t in accordance with facets of your self-image), well, there’s not yet been a human with non-contradictory values so you’re in good company.
Initially, my first instinct was to try and find the biggest font I could to say ‘no’. After actually stopping to think about it for a few minutes… I don’t know. It would probably have to be enough suffering to the point where it would destabilize society, but I haven’t come to any conclusions. Yet.
Heh, well, I suppose you’ve got a point there, but I’d still like my self-image to be accurate. Though I suppose around here that kind of goes without saying.
That sounds a bit like muddling the hypothetical, along the lines of “well, if I don’t let my family be tortured to death, all those strangers dying would destabilize society, which would also cause my loved ones harm”.
No. Consider the death of those strangers to have no discernible impact whatsoever on your loved ones, and to keep the numbers lower, let’s compare “x strangers tortured to death” versus “wife and child tortured to death”. Solve for x. You wouldn’t need to watch the deeds in both cases (although feel free to say what would change if you’d need to watch when choosing against your family), it would be a button choice scenario.
The difference between myself and many others on LW is that not only would I unabashedly decide in favor of my loved ones over an arbitrary amount of strangers (whose fate wouldn’t impact us), I do not find any fault with that choice, i.e. it is an accurate reflection of my prioritized values.
As the saying goes, “if the hill will not come to Skeeve, Skeeve will go to the hill”. There’s a better alternative to trying to rewrite your values to suit your self-image: constructing an honest self-image that reflects your values.
That was the sort of lines I was thinking along, yes. Framing the question in that fashion… I’m having some trouble imagining numbers of people large enough. It would have to be something on the order of ‘where x contains a majority of any given sentient species’.
The realization that I could willingly consign billions of people to death and be able to feel like I made the right decision in the morning is… unsettling.
I wish I could upvote you a second time just for this line. But yes, this is pretty much what I meant; I didn’t intend to imply that I wanted my self-image to be accurate and unchanging from what it is now, I’d just prefer it to be accurate.
The hypothetical is being posed at what is, to me, an unsatisfactory degree of abstraction. How about a more concrete form?
You are fighting in the covert resistance against some appallingly oppressive regime. (Goodness knows the 20th century has enough examples to choose from.) You get the news that the regime is onto you and has taken your wife and child hostage. What do you do?
We may grok that scenario in decidedly different ways:
Maybe it would serve the wife and child best if I were successful in my resistance to some degree, to have a better bargaining situation? Maybe if I gave myself up, the regime would lose any incentive to keep the hostages alive? At that point we’d just be navigating the intricacies of such added details. Better to stick with the intent of the actions: Personally, I’d take the course of action most likely to preserve the wife and child’s well-being, but then I probably wouldn’t have grown into a role which exposes family to the regime as high-value bargaining chips.
What is immorality then? Even a theist would say “morality is that which is good and should be done, and immorality is that which is not good and should not be done.” If you think it would be immoral to spare your wife and child, then you are saying it is not a good thing and shouldn’t be done. I am pretty sure protecting your family is a good thing, and most people would agree.
The problem, I think, is not that it is immoral to refrain from pushing your wife and child in front of a moving train, albeit to save five others, but that it is immoral to push any individual in front of a train to save some other individuals.
If you increase the numbers enough, though, I would think it changes, since you are not just saving others, but society, or civilization, or a town, or what have you. Sacrificing others for that is acceptable, but rarely does this require a single person’s sacrifice, and it usually requires the consent and deliberation of the society under threat. Hence the draft.
What I mean by ‘immorality’ is that I, on reflection, believe I am willing to break rules that I wouldn’t otherwise if it would benefit my family. Going back to the original switch problem, if it was ten people tied to the siding, and my wife and child tied to the main track, I’d flip the switch and send the train onto the siding.
I don’t know if that’s morally defensible, but it’s still what I’d do.
I’m finding myself disappointed that so many people have trouble distinguishing between “would”, “should”, and “prefer”.
You’re just saying that
a) you’d prefer to save your family
b) you believe you would save your family.
c) you probably should not.
There’s nothing at all contradictory in the above statements. You would do something and prefer to do something that you recognize you shouldn’t. What you “prefer” and what you “would” and what you “should” are all different logical concepts, so there’s no reason to think they always coincide even when they often do.
I don’t think I was having any trouble distinguishing between “would”, “should”, and “prefer”. Your analysis of my statement is spot on—it’s exactly what I was intending to say.
If morality is (rather simplistically) defined as what we “should” do, I ought to be concerned when what I would do and what I should do don’t line up, if I want to be a moral person.
Ah, but they should coincide. And if this is a moral problem, it is in the realm of “should”. If it is a question of whether you are a moral person, then it is in the realm of “would”. As for “prefer”, that is the most fluid concept, meaning either a measuring of contrasting values or your emotions on the matter.
This is incorrect. “Should” and “prefer” can’t give different answers for yourself, unless you really muddle the entire issue of morality altogether. Hopefully we can all agree that there is no such thing as an objective morality written down on the grand Morality Rock (and even if there were there would be no reason to actually follow it or call it moral). If we can’t then let me know and I’ll defend that rather than the rest of this post.
The important question is: what the hell do we mean by “morality”? It’s not something we can find written down somewhere on one of Jupiter’s moons, so what exactly is it, where does it come from, and most importantly where do our intuitions and knowledge about it come from?

The answer that seems most useful is that morality is the algorithm we want to use to determine what actions to take, if we could self-modify to be the kind of people we want to be. It comes from reflecting on our preferences and values and deciding which we think are really and truly important and which we would rather do without. We can’t always do it perfectly right now, because we run on hostile hardware, but if we could reflect on all our choices perfectly then we would always choose the moral one. That seems to align with our intuitions of morality as the thing we wish we could do, even if we sometimes can’t or don’t due to akrasia or just lack of virtue.

Thus, it is clear that there is a difference between what we “should” do and what we “would” do (just as there is sometimes a difference between the best answer we can get for a math problem and the one we actually write down on the test). But it’s clear that there is no difference between what we “should” do and what we would prefer we do. Even if you think my definition of morality is missing something, it should be clear that morality cannot come from anywhere other than our preferences. There simply isn’t anywhere else we could get information about what we “should” do which anyone in their right mind would not just ignore.
In short, if I would do x, and I prefer to do x, then why the heck would/should I care whether I should do x?! Morality in that case is completely meaningless; it’s no more useful than whatever’s written on the great Morality Rock. If I don’t prefer to act morally (according to whatever system is given) then I don’t care whether my action is “moral”.
But they do. “I know I shouldn’t, but I want to”. And since they do so often give different answer, they can give different answers.
I think we’re both in agreement that when we talk about “morality” we are in reality discussing something that some part of our brain is calculating or attempting to calculate. The disagreement between us is about what that calculation is attempting to do.
First of all, even that’s different from morality=preference—my calculated Morality(X) of an action X wouldn’t be calculated by my current Preference(CurrentAris, X), but rather by my estimated Preference(PreferredAris, X). So it would still allow that what I prefer to do is different from what I believe I should do.
Secondly, your definition doesn’t seem to me to explain how I can judge some people more moral than me and yet NOT want to be as moral as they are—can I invite you to read “The Jain’s Death”?
SOME SPOILERS for “The Jain’s Death” follow below...
Near the end of the first part of the comic, the Jain in question engages in a self-sacrificial action, which I don’t consider morally mandatory—I’m not even sure it’s morally permissible—and yet I consider her a more moral person than I am. I don’t want to be as moral as she is.
My own answer about what morality entails is that it’s an abstraction of our preferences in the attempted de-selfcenteredness of context. Let’s say that a fire marshal has the option of saving either 20 children in an orphanage or your own child.
What you prefer to do is that he save your own child. What you recognize as moral is that he save the 20 children. That’s because if you had no stakes in the issue that’s what your preference of what he should do would be. So morality is not preference, it’s abstracted preference.
And abstracted preference feeds back to influence actual preference, but it doesn’t fully replace the purely amoral preference.
So in that sense I can’t depersonalize so much to consider the Jain’s action better than mine would have been, so I don’t consider her action morally better. I don’t want to depersonalize that much either—so I don’t want to be as moral as she is. But she is more moral than me, because I recognize that she does depersonalize more, and lets that abstraction move her actions further than I ever would want to.
I think my answer also explains as to why some people believe morality objective and some view it as subjective. Because it’s the subjective attempt at objectivity. :-)
You would care about whether you should do x as a mere function of how our brains work—we’re wired so that the morality of a deed acts on our preferences. All other things being equal, the positive morality of a deed tends to act positively on our preferences.
You have a very specific, universal definition of morality, which does seem to meet some of our intuitions about the word but which is generally not at all useful outside of that. Specifically, for some reason when you say moral you mean unselfish. You mean what we would want to do if we, personally, were not involved. That captures some of our intuitions, but only does so insofar as that is a specific thing that sounds sort of good and that therefore tends to end up in a lot of moral systems. However, it is essentially a command from on high—thou shalt not place thine own interests above others.

I, quite frankly, don’t care what you think I should or shouldn’t do. I like living. I value my life higher than yours, by a lot. I think that in general people should flip the switch on the trolley problem, because I am more likely to be one of the 5 saved than the 1 killed. I think that if I already know I am the one, they should not. I understand why they wouldn’t care, and would flip it anyway, but I would do everything in my power (including use of the Dark Arts, bribes, threats, and lies) to convince them not to. And then I would walk away feeling sad that 5 people died, but nonetheless happy to be alive. I wouldn’t say that my action was immoral; on reflection I’d still want to live.
The major sticking point, honestly, is that the concept of morality needs to be dissolved. It is a wrong question. The terms can be preserved, but I’m becoming more and more convinced that they shouldn’t be. There is no such thing as a moral action. There is no such thing as good or evil. There are only things that I want, and things that you want, and things that other agents want. Clippy the paperclip maximizer is not evil, but I would kill him anyway (unless I could use him somehow with a plan to kill him later). I would adopt a binding contract to kill myself to save 5 others on the condition that everyone else does the same; but if I already know that I would be in a position to follow through on it then I would not adopt it. I don’t think that somehow I “should” adopt it even though I don’t want to, I just don’t want to adopt it and should is irrelevant (it’s exactly the same operation, mentally, as “want to”).
Basically, you’re trying to establish some standard of behavior and call it moral. And you’re wrong. That’s not what moral means in any sense other than that you have defined it to mean that. Which you can’t do. You’ve gotten yourself highly confused in the process. Restate your whole point, but don’t use the words moral or should anywhere (or synonyms). What you should find is that there’s no longer any point to be made. “Moral” and “should” are buzzwords with no meaning, but they sound like they should be important, so everyone keeps talking about them and throwing out nice-sounding things and calling them moral, only to be contradicted by other people with other nice things they call moral. Sometimes I think the fundamentalist theists have it better figured out; “moral” is what God says it is, and you care because otherwise you’re thrown into fire!
Let’s consider two scenarios:
X: You are the one, the train is running towards the five, and Bob chooses to flip the switch so that it kills you instead.
Y: You are among the five, the train is running towards the one, and Bob chooses to flip the switch so that it kills the five instead.
In both scenarios Bob flips the switch and as a result you die—but I think that in the case of action Y, where you are one of the five, you’d also likely be experiencing a sense of moral outrage towards Bob that you would be lacking in the case of action X.
There exist moral considerations in someone choosing his actions much like there exist considerations of taste in someone choosing his lunch. If you fail to acknowledge this, you’ll simply be predicting the actions of moral individuals wrongly.
Okay. There’s a mechanism in our brains that serves to calculate our abstracted preferences for behaviours—in the sense of attempting to calculate a preference as if we had no stakes in the given situation. The effects of this mechanism are several: it produces positive emotions towards people and behaviours that follow said abstracted preferences, negative emotions towards people and behaviours that don’t, and it contributes to determining our own actions, causing negative self-loathing feelings (labelled guilt or shame) when we fail to follow said abstracted preferences.
I think I did a good job above. You’ve failed to make your case to me that there is no meaning behind moral and should. We recognize the effects of morality (outrage, applauding, guilt), but we’re not self-aware enough about the mechanism of our moral calculations itself. But that isn’t surprising to me, there’s hardly any pattern-recognition in our brains whose mechanism we are self-aware about (I don’t consciously think “such-a-nose and such a face-shape” when my brain recognizes the face of my mother).
The only difference between optical pattern recognition and moral pattern recognition is that the latter deals with behaviours rather than visible objects. To tell me that there’s no morality is like telling me there’s no such thing as a square. Well sure, there’s no Squareness Rock somewhere in the universe, but it’s an actual pattern that our brains recognize.
It seems like a rather different statement to say that there exists a mechanism in our brains which tends to make us want to act as though we had no stakes in the situation, as opposed to talking about what is moral. I’m no evo-psych specialist but it seems plausible that such a mechanism exists. I dispute the notion that such a mechanism encompasses what is usually meant by morality. Most moral systems do not resolve to simply satisfying that mechanism. Also, I see no reason to label that particular mechanism “moral”, nor the output of it those things we “should” do (I don’t just disagree with this on reflection; it’s actually my intuition that “should” means what you want to do, while impartiality is a disconnected preference that I recognize but don’t associate even a little bit with should. I don’t seem to have an intuition about what morality means other than doing what you should, but then I get a little jarring sensation from the contact with my should intuition...).

You’ve described something I agree with after the taboo, but which before it I definitely disagree with. It’s just an issue of semantics at this point, but semantics are also important. “Morality” has really huge connotations for us; it’s a bit disingenuous to pick one specific part of our preferences and call it “moral”, or what we “should” do (even if that’s the part of our brain that causes us to talk about morality, it’s not what we mean by morality). I mean, I ignore parts of my preferences all the time. A thousand shards of desire and all that. Acting impartially is somewhere in my preferences, but it’s pretty effectively drowned out by everything else (and I would self-modify away from it given the option—it’s not worth giving anything up for on reflection, except as social customs dictate).
I can identify the mechanism you call moral outrage though. I experience (in my introspection of my self-simulation, so, you know, reliable data here /sarcasm) frustration that he would make a decision that would kill me for no reason (although it only just now occurred to me that he could be intentionally evil rather than stupid—that’s odd). I oddly experience a much stronger reaction imagining him being an idiot than imagining him directly trying to kill me. Maybe it’s a map from how my “should” algorithm is wired (you should do that which on reflection you want to do) onto the situation, which does make sense. I dislike the goals of the evil guy, but he’s following them as he should. The stupid one is failing to follow them correctly (and harming me in the process—I don’t get anywhere near as upset, although I do get some feeling from it, if he kills 5 to save me).
In short, using the word moral makes your point sound really different than when you don’t. I agree with it, mostly, without “moral” or “should”. I don’t think that most people mean anything close to what you’ve been using those words to mean, so I recommend some added clarity when talking about it. As to the Squareness Rock, “square” is a useful concept regardless of how I learned it—and if it was a Harblan Rock that told me a Harblan was a rectangle with sides in ratio 2:9, I wouldn’t care (unless there were special properties about Harblans). A Morality Rock only tells me some rules of behavior, which I don’t care about at all unless they line up with the preferences I already had. There is no such thing as morality, except in the way it’s encoded in individual human brains (if you want to call that morality, since I prefer simply calling it preferences); and your definition doesn’t even come close to the entirety of what is encoded in human brains.
(shrug) I find that “prefer” can give different answers for myself all on its own.
I’m not sure that is an important question, actually. Let alone the important question. What makes you think so?
No argument there, ultimately. But just because my beliefs about what I should do are ultimately grounded in terms of my preferences, it still doesn’t follow that in every situation my beliefs about what I should do will be identical to my beliefs about what I prefer to do.
Given that those two things are potentially different, it’s potentially useful to have ways of talking about the difference.
By the important question, I meant the important question with regard to the problem at hand. Ultimately I’ve since decided that the whole concept of morality is a sort of Wrong Question; discourse is vastly improved by eliminating the word altogether (and not replacing with a synonym).
What is the process which determines what you should do? What mental process do you perform to decide that you should or shouldn’t do x? When I try and pinpoint it I just keep finding myself using exactly the same thoughts as when I decide what I prefer to do. When I try to reflect back to my days as a Christian, I recall checking against a set of general rules of good and bad and determine where something lies on that spectrum. Should can mean something different from want in the sense of “according to the Christian Bible, you should use any means necessary to bring others to believe in Christ even if that hurts you.” But when talking about yourself? What’s the rule set you’re comparing to? I want to default to comparing to your preferences. If you don’t do that then you need to be a lot more specific about what you mean by “should”, and indeed why the word is useful at all in that context.
The mental process I go through to determine my preferences is highly scope-sensitive.
For example, the process underlying asking “which of the choices I have the practical ability to implement right now do I prefer?” is very different from “which of the choices I have the intellectual ability to conceive of right now do I prefer?”, which is very different from “do I prefer to choose from among my current choices, or defer choosing?”
Also, the answer I give to each of those questions depends a lot on what parts of my psyche I’m most identifying with at the moment I answer.
Many of my “should” statements refer to the results of the most far-mode, ego-less version of “prefer” that I’ve cached the results of evaluating. In those cases, yes, “should” is equivalent to (one and only one version of) “prefer.” Even in those cases, though, “prefer” is not (generally) equivalent to “should,” though in those cases I am generally happiest when my various other “prefers” converge on my “should”.
There are also “should” statements I make which are really social constructs I’ve picked up uncritically. I make some effort to evaluate these as I identify them and either discard them or endorse them on other grounds, but I don’t devote nearly the effort to that that would be required to complete the task. In many of those cases, my “should” isn’t equivalent to any form of “prefer,” and I am generally happiest in those cases when I discard that “should”.
I can see that. It will be a difficult choice, but I would do the same. I think it is morally defensible.
[Edit] On second thought, I am not a husband or father, but I would like to think I will one day have a family who has heroic virtue enough to be willing to sacrifice their live for others. How I would behave, again is subject to my emotions, but I would like to honor that wish.
As a non-parent, I endorse this comment.
There’s a possible loophole in the possibility that living with the grief of your dead family (and specifically, the knowledge that you could have prevented it) would prevent you from making the world so super-awesome.
What are the actual odds of that, though? Compared to the good you do (you’re on LW, so I’m guessing you’re more likely to do some rational altruism and save more than five lives than they are.)
I would assume you’re massively biased/emotionally compromised with regards to that scenario, just for evopsych reasons. So I’d be iffy about using that as a yardstick.
That said, you also presumably know them better, so there’s the risk that you’re treating the five victims as faceless NPCs.
Ultimately, it comes down to instrumental values. The five get a x5 and also automatically save four net lives, so you would have to be noticeably above average—but I’d say there’s enough low-hanging fruit around that that’s far from impossible.
After all, it’s not like these people are signed up for cryonics.
Beware of the straw Vulcan/Dickensian rule “The needs of the many...”. This is deontology disguised as utilitarianism. Sometimes it works and sometimes it doesn’t, and you don’t have to feel bad when it doesn’t.