I’m finding myself disappointed that so many people have trouble distinguishing between “would,” “should,” and “prefer.”
You’re just saying that
a) you’d prefer to save your family,
b) you believe you would save your family,
c) you probably should not.
There’s nothing at all contradictory in the above statements. You would do something and prefer to do something that you recognize you shouldn’t. What you “prefer” and what you “would” and what you “should” are all different logical concepts, so there’s no reason to think they always coincide even when they often do.
I don’t think I was having any trouble distinguishing between “would”, “should”, and “prefer”. Your analysis of my statement is spot on—it’s exactly what I was intending to say.
If morality is (rather simplistically) defined as what we “should” do, I ought to be concerned when what I would do and what I should do don’t line up, if I want to be a moral person.
Ah, but they [i]should[/i] coincide. And if this is a moral problem, it is in the realm of the [i]should[/i]. If it is a question of whether you are a moral person, then it is in the realm of the [i]would[/i]. As for [i]prefer[/i], that is the most fluid concept, meaning either a weighing of contrasting values or your emotions on the matter.
This is incorrect. “Should” and “prefer” can’t give different answers for yourself, unless you really muddle the entire issue of morality altogether. Hopefully we can all agree that there is no such thing as an objective morality written down on the grand Morality Rock (and even if there were, there would be no reason to actually follow it or call it moral). If we can’t agree on that, then let me know and I’ll defend that claim rather than the rest of this post.
The important question is: what the hell do we mean by “morality”? It’s not something we can find written down somewhere on one of Jupiter’s moons, so what exactly is it, where does it come from, and most importantly where do our intuitions and knowledge about it come from? The answer that seems most useful is that morality is the algorithm we want to use to determine what actions to take, if we could self-modify to be the kind of people we want to be. It comes from reflecting on our preferences and values and deciding which we think are really and truly important and which we would rather do without. We can’t always do it perfectly right now, because we run on hostile hardware, but if we could reflect on all our choices perfectly then we would always choose the moral one. That seems to align with our intuitions of morality as the thing we wish we could do, even if we sometimes can’t or don’t due to akrasia or just lack of virtue. Thus, it is clear that there is a difference between what we “should” do and what we “would” do (just as there is sometimes a difference between the best answer we can get for a math problem and the one we actually write down on the test). But it’s clear that there is no difference between what we “should” do and what we would prefer that we do. Even if you think my definition of morality is missing something, it should be clear that morality cannot come from anywhere other than our preferences. There simply isn’t anywhere else we could get information about what we “should” do that anyone in their right mind wouldn’t just ignore.
In short, if I would do x, and I prefer to do x, then why the heck would/should I care whether I should do x?! Morality in that case is completely meaningless; it’s no more useful than whatever’s written on the great Morality Rock. If I don’t prefer to act morally (according to whatever system is given) then I don’t care whether my action is “moral”.
“Should” and “prefer” can’t give different answers for yourself, unless you really muddle the entire issue of morality altogether.
But they do. “I know I shouldn’t, but I want to.” And since they do so often give different answers, they can give different answers.
Hopefully we can all agree that there is no such thing as an objective morality written down on the grand Morality Rock
I think we’re both in agreement that when we talk about “morality” we are in reality discussing something that some part of our brain is calculating or attempting to calculate. The disagreement between us is about what that calculation is attempting to do.
The answer that seems most useful is that morality is the algorithm we want to use to determine what actions to take, if we could self-modify to be the kind of people we want to be.
First of all, even that’s different from morality=preference—my calculated Morality(X) of an action X wouldn’t be calculated by my current Preference(CurrentAris, X), but rather by my estimated Preference(PreferredAris, X). So it would still allow that what I prefer to do is different from what I believe I should do.
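To make that distinction concrete, here’s a minimal sketch in Python. The weights and value names are purely illustrative assumptions of mine, not anything specified in this thread; the only thing taken from the discussion is the Preference(agent, action) notation:
[code]
# Illustrative sketch only: the weights and value names are made up.
# Morality(X) is estimated as Preference(PreferredAris, X), which need not
# agree with Preference(CurrentAris, X), i.e. what I actually prefer to do.

def preference(agent_weights, action_effects):
    """Score an action by how well it satisfies an agent's weighted values."""
    return sum(agent_weights.get(value, 0.0) * amount
               for value, amount in action_effects.items())

# The self I currently am: heavily partial to my own family.
current_aris = {"family_saved": 10.0, "strangers_saved": 1.0}
# The self I would prefer to be: still partial, but much less so.
preferred_aris = {"family_saved": 3.0, "strangers_saved": 1.0}

save_family = {"family_saved": 1, "strangers_saved": 0}
save_five_strangers = {"family_saved": 0, "strangers_saved": 5}

# What I prefer to do and what I believe I should do come apart:
print(preference(current_aris, save_family)
      > preference(current_aris, save_five_strangers))    # True
print(preference(preferred_aris, save_family)
      > preference(preferred_aris, save_five_strangers))  # False
[/code]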
Secondly, your definition doesn’t seem to me to explain how I can judge some people more moral than me and yet NOT want to be as moral as they are—may I invite you to read “The Jain’s Death”?
SOME SPOILERS for “The Jain’s Death” follow below...
Near the end of the first part of the comic, the Jain in question engages in a self-sacrificial action, which I don’t consider morally mandatory—I’m not even sure it’s morally permissible—and yet I consider her a more moral person than I am. I don’t want to be as moral as she is.
My own answer about what morality entails is that it’s an abstraction of our preferences: an attempt to remove the self-centeredness from the context. Let’s say that a fire marshal has the option of saving either 20 children in an orphanage or your own child.
What you prefer is that he save your own child.
What you recognize as moral is that he save the 20 children.
That’s because, if you had no stake in the issue, that’s what you would prefer that he do.
So morality is not preference, it’s abstracted preference.
And abstracted preference feeds back to influence actual preference, but it doesn’t fully replace the purely amoral preference.
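If it helps, here is the same idea as a rough Python sketch, using the fire marshal example. The particular numbers (how much more my own child counts, how strongly the abstraction feeds back) are illustrative assumptions, not claims about anyone’s actual psychology:
[code]
# Rough sketch of "abstracted preference": evaluate the outcome as if I had
# no stake in it, then let that feed back into (without replacing) the raw,
# self-interested preference. All numbers are illustrative assumptions.

def raw_preference(outcome, my_child_weight=50.0, other_child_weight=1.0):
    """Self-interested score: my own child counts far more than a stranger's."""
    return (my_child_weight * outcome["my_children_saved"]
            + other_child_weight * outcome["other_children_saved"])

def abstracted_preference(outcome):
    """Score as if I had no stake: every child counts the same."""
    return outcome["my_children_saved"] + outcome["other_children_saved"]

def actual_preference(outcome, moral_pull=0.3):
    """The abstraction influences, but does not fully replace, the raw preference."""
    return ((1 - moral_pull) * raw_preference(outcome)
            + moral_pull * abstracted_preference(outcome))

save_own_child = {"my_children_saved": 1, "other_children_saved": 0}
save_orphans = {"my_children_saved": 0, "other_children_saved": 20}

# The "moral" verdict (abstracted) vs. what you would still prefer (actual):
print(abstracted_preference(save_orphans) > abstracted_preference(save_own_child))  # True
print(actual_preference(save_orphans) > actual_preference(save_own_child))          # False
[/code]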
So in that sense I can’t depersonalize so much as to consider the Jain’s action better than mine would have been, so I don’t consider her action morally better. I don’t want to depersonalize that much either—so I don’t want to be as moral as she is. But she is more moral than me, because I recognize that she does depersonalize more, and lets that abstraction move her actions further than I ever would want to.
I think my answer also explains why some people believe morality is objective and some view it as subjective: it’s the subjective attempt at objectivity. :-)
In short, if I would do x, and I prefer to do x, then why the heck would/should I care whether I should do x?!
You would care about whether you should do x simply as a function of how our brains work—we’re wired so that the morality of a deed acts on our preferences. All other things being equal, the positive morality of a deed tends to act positively on our preferences.
You have a very specific, universal definition of morality, which does seem to meet some of our intuitions about the word but which is generally not at all useful outside of that. Specifically, for some reason when you say moral you mean unselfish. You mean what we would want to do if we, personally, were not involved. That captures some of our intuitions, but only does so insofar as that is a specific thing that sounds sort of good and that therefore tends to end up in a lot of moral systems. However, it is essentially a command from on high—thou shalt not place thine own interests above others’. I, quite frankly, don’t care what you think I should or shouldn’t do. I like living. I value my life higher than yours, by a lot. I think that in general people should flip the switch on the trolley problem, because I am more likely to be one of the 5 saved than the 1 killed. I think that if I already know I am the one, they should not. I understand why they wouldn’t care, and would flip it anyway, but I would do everything in my power (including use of the Dark Arts, bribes, threats, and lies) to convince them not to. And then I would walk away feeling sad that 5 people died, but nonetheless happy to be alive. I wouldn’t say that my action was immoral; on reflection I’d still want to live.
The major sticking point, honestly, is that the concept of morality needs to be dissolved. It is a wrong question. The terms can be preserved, but I’m becoming more and more convinced that they shouldn’t be. There is no such thing as a moral action. There is no such thing as good or evil. There are only things that I want, and things that you want, and things that other agents want. Clippy the paperclip maximizer is not evil, but I would kill him anyway (unless I could use him somehow with a plan to kill him later). I would adopt a binding contract to kill myself to save 5 others on the condition that everyone else does the same; but if I already know that I would be in a position to follow through on it then I would not adopt it. I don’t think that somehow I “should” adopt it even though I don’t want to, I just don’t want to adopt it and should is irrelevant (it’s exactly the same operation, mentally, as “want to”).
Basically, you’re trying to establish some standard of behavior and call it moral. And you’re wrong. That’s not what moral means, in any sense other than that you have defined it to mean that. Which you can’t do. You’ve gotten yourself highly confused in the process. Restate your whole point, but don’t use the words moral or should anywhere (or synonyms). What you should find is that there’s no longer any point to be made. “Moral” and “should” are buzzwords with no meaning, but they sound like they should be important, so everyone keeps talking about them, throwing out nice-sounding things and calling them moral, only to be contradicted by other people throwing out other nice-sounding things and calling those moral. Sometimes I think the fundamentalist theists have it better figured out: “moral” is what God says it is, and you care because otherwise you’re thrown into the fire!
I think that in general people should flip the switch on the trolley problem, because I am more likely to be one of the 5 saved than the 1 killed. I think that if I already know I am the one, they should not.
Let’s consider two scenarios:
X: You are the one, the train is running towards the five, and Bob chooses to flip the switch so that it kills you instead.
Y: You are among the five, the train is running towards the one, and Bob chooses to flip the switch so that it kills the five instead.
In both scenarios Bob flips the switch and as a result you die—but I think that in the case of action Y, where you are one of the five, you would also likely be experiencing a sense of moral outrage towards Bob that you would be lacking in the case of action X.
There are only things that I want, and things that you want, and things that other agents want.
There exist moral considerations in someone choosing his actions, much like there exist considerations of taste in someone choosing his lunch. If you fail to acknowledge this, you’ll simply be predicting the actions of moral individuals wrongly.
Restate your whole point, but don’t use the words moral or should anywhere (or synonyms).
Okay. There’s a mechanism in our brains that serves to calculate our abstracted preferences for behaviours—in the sense of attempting to calculate what our preference would be if we had no stakes in the given situation. The effects of this mechanism are several: it produces positive emotions towards people and behaviours that follow said abstracted preferences, negative emotions towards people and behaviours that don’t, and it contributes to determining our own actions, causing negative self-loathing feelings (labelled guilt or shame) when we fail to follow said abstracted preferences.
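As a crude Python sketch of what I mean (the labels and the comparison rule are illustrative assumptions on my part, not a claim about the actual neural machinery), the mechanism’s outputs look something like this:
[code]
# Crude sketch of the described mechanism: compare a behaviour against the
# abstracted preference and emit the corresponding reaction. The labels and
# threshold are illustrative assumptions, not a model of real brains.

def moral_reaction(actor, behaviour_score, abstracted_score):
    """Approval/outrage toward others; guilt or shame when I am the one who falls short."""
    follows_abstraction = behaviour_score >= abstracted_score
    if actor == "self":
        return "no self-reproach" if follows_abstraction else "guilt/shame"
    return "positive emotion (approval)" if follows_abstraction else "negative emotion (outrage)"

# Bob either matches the abstracted preference (saves the five) or falls short:
print(moral_reaction("other", behaviour_score=5, abstracted_score=5))  # approval
print(moral_reaction("other", behaviour_score=1, abstracted_score=5))  # outrage
print(moral_reaction("self", behaviour_score=1, abstracted_score=5))   # guilt/shame
[/code]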
What you should find is that there’s no longer any point to be made. “Moral” and “should” are buzzwords with no meaning,
I think I did a good job above. You’ve failed to make your case to me that there is no meaning behind moral and should. We recognize the effects of morality (outrage, applauding, guilt), but we’re not self-aware enough about the mechanism of our moral calculation itself. That isn’t surprising to me, though; there’s hardly any pattern recognition in our brains whose mechanism we are self-aware about (I don’t consciously think “such-a-nose and such a face-shape” when my brain recognizes the face of my mother).
The only difference between optical pattern recognition and moral pattern recognition is that the latter deals with behaviours rather than visible objects. To tell me that there’s no morality is like telling me there’s no such thing as a square. Well sure, there’s no Squareness Rock somewhere in the universe, but a square is an actual pattern that our brains recognize.
It seems like a rather different statement to say that there exists a mechanism in our brains which tends to make us want to act as though we had no stakes in the situation, as opposed to talking about what is moral. I’m no evo-psych specialist, but it seems plausible that such a mechanism exists. I dispute the notion that such a mechanism encompasses what is usually meant by morality. Most moral systems do not resolve to simply satisfying that mechanism. Also, I see no reason to label that particular mechanism “moral”, nor its output the things we “should” do (I don’t just disagree with this on reflection; it’s actually my intuition that “should” means what you want to do, while impartiality is a disconnected preference that I recognize but don’t associate even a little bit with should. I don’t seem to have an intuition about what morality means other than doing what you should, but then I get a little jarring sensation from the contact with my should intuition...). You’ve described something I agree with after the taboo, but which before it I definitely disagree with. It’s just an issue of semantics at this point, but semantics are also important. “Morality” has really huge connotations for us; it’s a bit disingenuous to pick one specific part of our preferences and call it “moral”, or what we “should” do (even if that’s the part of our brain that causes us to talk about morality, it’s not what we mean by morality). I mean, I ignore parts of my preferences all the time. A thousand shards of desire and all that. Acting impartially is somewhere in my preferences, but it’s pretty effectively drowned out by everything else (and I would self-modify away from it given the option—it’s not worth giving anything up for on reflection, except as social customs dictate).
I can identify the mechanism you call moral outrage though. I experience (in my introspection of my self-simulation, so, you know, reliable data here /sarcasm) frustration that he would make a decision that would kill me for no reason (although it only just now occurred to me that he could be intentionally evil rather than stupid—that’s odd). I oddly experience a much stronger reaction imagining him being an idiot than imagining him directly trying to kill me. Maybe it’s a map from how my “should” algorithm is wired (you should do that which on reflection you want to do) onto the situation, which does make sense. I dislike the goals of the evil guy, but he’s following them as he should. The stupid one is failing to follow them correctly (and harming me in the process—I don’t get anywhere near as upset, although I do get some feeling from it, if he kills 5 to save me).
In short, using the word moral makes your point sound really different than when you don’t. I agree with it, mostly, without “moral” or “should”. I don’t think that most people mean anything close to what you’ve been using those words to mean, so I recommend some added clarity when talking about it. As to the Squareness Rock, “square” is a useful concept regardless of how I learned it—and if it were a Harblan Rock that told me a Harblan was a rectangle with sides in a 2:9 ratio, I wouldn’t care (unless there were special properties about Harblans). A Morality Rock only tells me some rules of behavior, which I don’t care about at all unless they line up with the preferences I already had. There is no such thing as morality, except in the way it’s encoded in individual human brains (if you want to call that morality, since I prefer simply calling it preferences); and your definition doesn’t even come close to the entirety of what is encoded in human brains.
“Should” and “prefer” can’t give different answers for yourself, unless you really muddle the entire issue of morality altogether.
(shrug) I find that “prefer” can give different answers for myself all on its own.
The important question is: what the hell do we mean by “morality”?
I’m not sure that is an important question, actually. Let alone the important question. What makes you think so?
morality cannot come from anywhere other than our preferences.
No argument there, ultimately. But just because my beliefs about what I should do are ultimately grounded in terms of my preferences, it still doesn’t follow that in every situation my beliefs about what I should do will be identical to my beliefs about what I prefer to do.
Given that those two things are potentially different, it’s potentially useful to have ways of talking about the difference.
By the important question, I meant the important question with regard to the problem at hand. Ultimately I’ve since decided that the whole concept of morality is a sort of Wrong Question; discourse is vastly improved by eliminating the word altogether (and not replacing with a synonym).
What is the process which determines what you should do? What mental process do you perform to decide that you should or shouldn’t do x? When I try to pinpoint it, I just keep finding myself using exactly the same thoughts as when I decide what I prefer to do. When I try to reflect back to my days as a Christian, I recall checking against a set of general rules of good and bad and determining where something lies on that spectrum. “Should” can mean something different from want in the sense of “according to the Christian Bible, you should use any means necessary to bring others to believe in Christ, even if that hurts you.” But when talking about yourself? What’s the rule set you’re comparing to? I want to default to comparing to your preferences. If you don’t do that, then you need to be a lot more specific about what you mean by “should”, and indeed why the word is useful at all in that context.
The mental process I go through to determine my preferences is highly scope-sensitive.
For example, the process underlying asking “which of the choices I have the practical ability to implement right now do I prefer?” is very different from “which of the choices I have the intellectual ability to conceive of right now do I prefer?”, which is very different from “do I prefer to choose from among my current choices, or defer choosing?”
Also, the answer I give to each of those questions depends a lot on what parts of my psyche I’m most identifying with at the moment I answer.
Many of my “should” statements refer to the results of the most far-mode, ego-less version of “prefer” that I’ve cached the results of evaluating. In those cases, yes, “should” is equivalent to (one and only one version of) “prefer.” Even in those cases, though, “prefer” is not (generally) equivalent to “should,” though in those cases I am generally happiest when my various other “prefers” converge on my “should”.
There are also “should” statements I make which are really social constructs I’ve picked up uncritically. I make some effort to evaluate these as I identify them and either discard them or endorse them on other grounds, but I don’t devote nearly the effort that completing the task would require. In many of those cases, my “should” isn’t equivalent to any form of “prefer,” and I am generally happiest in those cases when I discard that “should”.