I think that, in general, people should flip the switch in the trolley problem, because I am more likely to be one of the five saved than the one killed. If I already know that I am the one, I think they should not.
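To make the arithmetic behind that explicit, here's a minimal sketch, assuming (my addition, not stated above) that behind the veil of ignorance you are equally likely to be any of the six people on the tracks:

```python
# Veil-of-ignorance arithmetic for the standard six-person trolley setup.
# Assumption: before you know your position, you are equally likely to be
# any one of the six people on the tracks.
p_you_are_the_one = 1 / 6
p_you_are_among_five = 5 / 6

# If nobody flips the switch, the train hits the five.
p_die_if_no_flip = p_you_are_among_five   # ~0.83

# If the switch is flipped, the train hits the one instead.
p_die_if_flip = p_you_are_the_one         # ~0.17

print(f"P(die | no flip) = {p_die_if_no_flip:.2f}")
print(f"P(die | flip)    = {p_die_if_flip:.2f}")
```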
Let’s consider two scenarios:
X: You are the one, the train is running towards the five, and Bob chooses to flip the switch so that it kills you instead.
Y: You are among the five, the train is running towards the one, and Bob chooses to flip the switch so that it kills the five instead.
In both scenarios Bob flips the switch and as a result you die—but I think that in the case of action Y, where you are one of the five, you’d also likely be experiencing a sense of moral outrage towards Bob that you would be lacking in the case of action X.
There are only things that I want, and things that you want, and things that other agents want.
There exist moral considerations in someone choosing his actions, much like there exist considerations of taste in someone choosing his lunch. If you fail to acknowledge this, you’ll simply be predicting the actions of moral individuals wrongly.
Restate your whole point, but don’t use the words moral or should anywhere (or synonyms).
Okay. There’s a mechanism in our brains that serves to calculate our abstracted preferences for behaviours—in the sense of attempting to calculate the preference we would have if we had no stakes in the given situation. The effects of this mechanism are several: it produces positive emotions towards people and behaviours that follow said abstracted preferences, negative emotions towards people and behaviours that don’t, and it contributes to determining our own actions, causing negative self-loathing feelings (labelled guilt or shame) when we fail to follow said abstracted preferences.
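A toy sketch of what I mean (purely illustrative; the function names and the simple welfare-summing rule are my own assumptions, not a claim about how the brain actually computes this): the mechanism scores each available behaviour as if the evaluator had no personal stake, then maps the result onto approval or outrage towards others and onto guilt towards oneself.

```python
# Toy illustration of the mechanism described above. All names are hypothetical.

def abstracted_preference(action, payoffs):
    """Impartial score: total welfare of everyone affected, ignoring who the evaluator is."""
    return sum(payoffs[action])

def reaction(evaluator_is_actor, chosen, payoffs):
    """Emotional output produced by the mechanism."""
    best = max(payoffs, key=lambda a: abstracted_preference(a, payoffs))
    if chosen == best:
        return "guilt-free" if evaluator_is_actor else "approval"
    return "guilt/shame" if evaluator_is_actor else "moral outrage"

# Trolley example: payoff lists are survival (1) or death (0) for the six people involved.
payoffs = {"flip": [0, 1, 1, 1, 1, 1], "do_nothing": [1, 0, 0, 0, 0, 0]}
print(reaction(evaluator_is_actor=False, chosen="do_nothing", payoffs=payoffs))  # moral outrage
```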
What you should find is that there’s no longer any point to be made. “Moral” and “should” are buzzwords with no meaning.
I think I did a good job above. You’ve failed to make your case to me that there is no meaning behind moral and should. We recognize the effects of morality (outrage, applauding, guilt), but we’re not self-aware enough about the mechanism of our moral calculations itself. That isn’t surprising to me; there’s hardly any pattern-recognition in our brains whose mechanism we are self-aware about (I don’t consciously think “such-a-nose and such a face-shape” when my brain recognizes the face of my mother).
The only difference between optical pattern recognition and moral pattern recognition is that the latter deals with behaviours rather than visible objects. To tell me that there’s no morality is like telling me there’s no such thing as a square. Well sure, there’s no Squareness Rock somewhere in the universe, but it’s an actual pattern that our brains recognize.
It seems like a rather different statement to say that there exists a mechanism in our brains which tends to make us want to act as though we had no stakes in the situation, as opposed to talking about what is moral. I’m no evo-psych specialist, but it seems plausible that such a mechanism exists. I dispute the notion that such a mechanism encompasses what is usually meant by morality. Most moral systems do not resolve to simply satisfying that mechanism. Also, I see no reason to label that particular mechanism “moral”, nor its output those things we “should” do (I don’t just disagree with this on reflection; it’s actually my intuition that “should” means what you want to do, while impartiality is a disconnected preference that I recognize but don’t associate even a little bit with should. I don’t seem to have an intuition about what morality means other than doing what you should, but then I get a little jarring sensation from the contact with my should intuition...).

You’ve described something I agree with after the taboo, but which before it I definitely disagree with. It’s just an issue of semantics at this point, but semantics are also important. “Morality” has really huge connotations for us; it’s a bit disingenuous to pick one specific part of our preferences and call it “moral”, or what we “should” do (even if that’s the part of our brain that causes us to talk about morality, it’s not what we mean by morality).

I mean, I ignore parts of my preferences all the time. A thousand shards of desire and all that. Acting impartially is somewhere in my preferences, but it’s pretty effectively drowned out by everything else (and I would self-modify away from it given the option—it’s not worth giving anything up for on reflection, except as social customs dictate).
I can identify the mechanism you call moral outrage though. I experience (in my introspection of my self-simulation, so, you know, reliable data here /sarcasm) frustration that he would make a decision that would kill me for no reason (although it only just now occurred to me that he could be intentionally evil rather than stupid—that’s odd). I oddly experience a much stronger reaction imagining him being an idiot than imagining him directly trying to kill me. Maybe it’s a map from how my “should” algorithm is wired (you should do that which on reflection you want to do) onto the situation, which does make sense. I dislike the goals of the evil guy, but he’s following them as he should. The stupid one is failing to follow them correctly (and harming me in the process—I don’t get anywhere near as upset, although I do get some feeling from it, if he kills 5 to save me).
In short, using the word moral makes your point sound really different than when you don’t. I agree with it, mostly, without “moral” or “should”. I don’t think that most people mean anything close to what you’ve been using those words to mean, so I recommend some added clarity when talking about it. As to the Squareness Rock, “square” is a useful concept regardless of how I learned it—and if it was a Harblan Rock that told me a Harblan was a rectangle with sides in a 2:9 ratio, I wouldn’t care (unless there were special properties about Harblans). A Morality Rock only tells me some rules of behavior, which I don’t care about at all unless they line up with the preferences I already had. There is no such thing as morality, except in the way it’s encoded in individual human brains (if you want to call that morality, since I prefer simply calling it preferences); and your definition doesn’t even come close to the entirety of what is encoded in human brains.