How about something like this: There’s a certain set of semi-abstract criteria that we call ‘morality’. And we happen to be the sorts of beings that (for various reasons) happen to care about this morality stuff as opposed to caring about something else. Should we care about morality? Well, what is meant by “should”? It sure seems like that’s a term we use to simply point at the same morality criteria/computation. In other words, “should we care about morality” seems to translate to “is it moral to care about morality”, or “apply the morality function to ‘care about morality’ and check the output”.
It would seem also that the answer is yes, it is moral to care about morality.
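The “apply the morality function and check the output” move can be pictured with a toy sketch. Everything here is invented for illustration — `morality` is a stand-in for a computation nobody can actually write down, and the approved-actions set is a placeholder, not a claim about its contents:

```python
# Toy model: treat "morality" as a black-box function from actions to
# evaluations. The body is a fake placeholder; the real computation is
# only implicitly encoded in human brains.
def morality(action: str) -> bool:
    approved = {"save lives", "help people", "bring joy", "care about morality"}
    return action in approved

# "Should we care about morality?" then just means: evaluate the
# morality function on the action "care about morality".
print(morality("care about morality"))   # the claim in the text: True
print(morality("maximize paperclips"))   # not among the things morality approves
```

The point of the sketch is only that “should?” names a question with a definite evaluator, not that the evaluator is this simple.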
Some other creatures somewhere might care about something other than morality. That’s not a disagreement about any facts or theory or anything; it’s simply that we care about morality and they may care about something like “maximize paperclip production” or whatever.
But, of course, morality is better than paper-clip-ality. (And, of course, when we say “better”, we mean “in terms of those criteria we care about”… ie, morality again.)
It’s not quite circular. We and the paperclipper creatures wouldn’t really disagree about anything. They’d say “turning all the matter in the solar system into paperclips is paperclipish”, and we’d agree. We’d say “it’s more moral not to do so”, and they’d agree.
The catch is that they don’t give a dingdong about morality, and we don’t give a dingdong about paperclipishness. And indeed that does make us better. And if they scanned our minds to see what we mean by “better”, they’d agree. But then, that criteria that we were referring to by the term “better” is simply not something the paperclippers care about.
“we happen to care about it” is not the justification. “It’s moral” is the justification. It’s just that our criterion for valid moral justification is, well… morality. Which is as it should be. etc etc.
Morality seems to be an objective criterion. Actions can be judged good or bad in terms of morality. We simply happen to care about morality instead of something else. And this is indeed a good thing.
I don’t understand two sentences in a row. Not here, not in the meta-ethics sequence, not anywhere where you guys talk about morality.
I don’t understand why I seem to be cognitively fine on other topics on Less Wrong, but then all of a sudden am Flowers for Algernon here.
I’m not going to comment anymore on this topic; it just so happens meta-morality or meta-ethics isn’t something I worry about anyway. But I would like to part with the admonition that I don’t see any reason why LW should be separating so many words from their original meanings—“good”, “better”, “should”, etc. It doesn’t seem to be clarifying things even for you guys.
I think that when something is understood—really understood—you can write it down in words. If you can’t describe an understanding, you don’t own it.
Huh? I’m asserting that most people, when they use words like “morality”, “should” (in a moral context), “better” (ditto), etc, are pointing at the same thing. That is, we think this sort of thing partly captures what people actually mean by the terms. Now, we don’t have full self-knowledge, and our morality algorithm hasn’t finished reflecting (that is, hasn’t finished reconsidering itself, etc), so we have uncertainty about what sorts of things are or are not moral… But that’s a separate issue.
As far as the rest… I’m pretty sure I understand the basic idea. Anything I can do to help clarify it?
How about this: “morality is objective, and we simply happen to be the sorts of beings that care about morality as opposed to, say, evil psycho alien bots that care about maximizing paperclips instead of morality”
Does that help at all?
It looks circular to me. Of course, if you look hard enough at any views like this, the only choices are circles and terminating lines, and it seems almost an aesthetic matter which someone goes with, but this is such a small circle. It’s right to care about morality and to be moral because morality says so and morality possesses the sole capacity to identify “rightness”, including the rightness of caring about morality.
It’s almost, well, I hate to say this, more a matter of definitions.
ie, what do you MEAN by the term “right”?
Just keep poking your brain about that, and keep poking your brain about what you mean by “should” and what you actually mean by terms like “morality” and I think you’ll find that all those terms are pointing at the same thing.
It’s not so much “there’s this criterion of ‘rightness’ that only morality has the ability to measure” but rather that an appeal to morality is what we mean when we say stuff like “‘should’ we do this? is it ‘right’?” etc...
The situation is more, well, like this:
Humans: “Morality says that, among other things, it’s better and moral to be, well, moral. It is also moral to save lives, help people, bring joy, and a whole lot of other things”
Paperclippers: “having scanned your brains to see what you mean by these terms, we agree with your statement.”
Paperclippers: “Converting all the matter in your system into paperclips is paperclipish. Further, it is better and paperclipish to be paperclipish.”
Humans: “having scanned your minds to determine what you actually mean by those terms, we agree with your statement.”
Humans: “However, we don’t care about paperclipishness. We care about morality. Turning all the matter of our solar system (including the matter we are composed of) into paperclips is bad, so we will try to stop you.”
Paperclippers: “We do not care about morality. We care about paperclipishness. Resisting the conversion to paperclips is unpaperclipish. Therefore we will try to crush your resistance.”
This is very different from what we normally think of as circular arguments, which would be of the form of “A, therefore B, therefore A, QED”, while the other side would be “no! not A”
Here, all sides agree about the facts. It’s just that they value different things. But the fact of humans valuing the stuff isn’t the justification for valuing that stuff. The justification is that it’s moral. It’s just a fact that we happen to be moved by arguments like “it’s moral”, unlike the wicked paperclippers, who only care about whether it’s paperclipish or not.
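The dialogue above can be compressed into another toy sketch (again, every name and value is invented): two evaluation functions, both computed correctly by both sides, with no factual disagreement anywhere — each side simply acts on the output of a different function.

```python
# Hypothetical plan under dispute.
PLAN = "convert solar system to paperclips"

def moral(plan: str) -> bool:
    # Stand-in for the morality computation: this plan destroys lives, so no.
    return plan != "convert solar system to paperclips"

def paperclipish(plan: str) -> bool:
    # Stand-in for the paperclippers' criterion: more paperclips, yes.
    return plan == "convert solar system to paperclips"

# Both sides, having scanned each other's minds, compute the same outputs:
human_view = (moral(PLAN), paperclipish(PLAN))
clippy_view = (moral(PLAN), paperclipish(PLAN))
assert human_view == clippy_view  # no disagreement about any evaluation

# The conflict is only over which output moves each side to act:
humans_pursue_plan = moral(PLAN)         # False -> humans resist
clippies_pursue_plan = paperclipish(PLAN)  # True -> paperclippers proceed
```

This is why it isn’t a circular argument in the usual “A, therefore B, therefore A” sense: there is no premise either side rejects, only a difference in which function each side is wired to act on.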
But why should I feel obliged to act morally instead of paperclippishly?
Circles seem all well and good when you’re already inside of them, but being inside of them already is kind of not the point of discussing meta-ethics.
“should”
What do you mean by “should”? Do you actually mean anything by it other than an appeal to morality in the first place?
Well, that’s not necessarily a moral sense of ‘should’, I guess—I’m asking whether I have any sort of good reason to act morally, be it an appeal to my interests or to transcendent moral reasons or whatever.
It’s generally the contention of moralists and paperclipists that there’s always good reason for everyone to act morally or paperclippishly. But proving that this contention itself just boils down to yet another moral/paperclippy claim doesn’t seem to help their case any. It just demonstrates what a tight circle their argument is, and what little reason someone outside of it has to care about it if they don’t already.
What do you mean by “should” in this context other than a moral sense of it? What would count as a “good reason”?
As far as your statement about both moralists and paperclippers thinking there are “good reasons”… the catch is that the phrase “good reasons” is being used to refer to two distinct concepts. When a human/moralist uses it, they mean, well… good, as opposed to evil.
A paperclipper, however, is not concerned at all about that standard. A paperclipper cares about what, well, maximizes paperclips.
It’s not that it should do so, but simply that it doesn’t care what it should do. Being evil doesn’t bother it any more than failing to maximize paperclips bothers you.
Being evil is clearly worse (where by “worse” I mean, well, immoral, bad, evil, etc...) than being good. But the paperclipper doesn’t care. But you do (as far as I know. If you don’t, then… I think you scare me). What sort of standard other than morality would you want to appeal to for this sort of issue in the first place?
What do you mean by “should” in this context other than a moral sense of it? What would count as a “good reason”?
By that I mean rationally motivating reasons.
But I’d be willing to concede, if you pressed, that ‘rationality’ is itself just another set of action-directing values. The point would still stand: if the set of values I mean when I say ‘rationality’ is incongruent with the set of values you mean when you say ‘morality,’ then it appears you have no grounds on which to persuade me to be directed by morality.
This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being. So I’m not sure if the position you’re espousing is just a complicated way of expressing surrender, or an attempt to reframe the question, or what, but it doesn’t seem to get us any more traction when it comes to answering “Why should I be moral?”
But you do (as far as I know. If you don’t, then… I think you scare me).
Duly noted, but is what I happen to care about relevant to this issue of meta-ethics?
Rationality is basically “how to make an accurate map of the world… and how to WIN” (where “win” basically means getting what you “want” (where “want” includes all your preferences, stuff like morality, etc etc...)).
Before rationality can tell you what to do, you have to tell it what it is you’re trying to do.
If your goal is to save lives, rationality can help you find ways to do that. If your goal is to turn stuff into paperclips, rationality can help you find ways to do that too.
I’m not sure I quite understand what you mean by “rationally motivating” reasons.
As far as objectively compelling to any sentient (let me generalize that to any intelligent being)… Why should there be any such thing? “Doing this will help ensure your survival” “But… what if I don’t care about this?”
“doing this will bring joy” “So?”
etc etc… There are No Universally Compelling Arguments.
This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being.
According to the original post, strong moral realism (the above) is not held by most moral realists.
Well, my “moral reasons are to be...” there was kind of slippery. The ‘strong moral realism’ Roko outlined seems to be based on a factual premise (“All...beings...will agree...”), which I’d agree most moral realists are smart enough not to hold. The much more commonly held view seems to amount instead to a sort of … moral imperative to accept moral imperatives—by positing a set of knowable moral facts that we might not bother to recognize or follow, but ought to. Which seems like more of the same circular reasoning that Psy-Kosh has been talking about/defending.
What I’m saying is that when you say the word “ought”, you mean something. Even if you can’t quite articulate it, you have some sort of standard for saying “you ought do this, you ought not do that” that is basically the definition of ought.
I’m saying “this oughtness, whatever it is, is the same thing that you mean when you talk about ‘morality’.” So “ought I be moral?” directly translates to “is it moral to be moral?”
I’m not saying “only morality has the authority to answer this question” but rather “uh… ‘is X moral?’ is kind of what you actually mean by ought/should/etc, isn’t it? ie, if I do a bit of a trace in your brain and follow the word back to its associated concepts, isn’t it going to be pointing at/labeling the same algorithms that ‘morality’ labels in your brain?”
So basically it amounts to “yes, there’re things that one ought to do… and there can exist beings that know this but simply don’t care about whether or not they ‘ought’ to do something.”
It’s not that another being refuses to recognize this so much as they’d be saying “So what? we don’t care about this ‘oughtness’ business.” It’s not a disagreement, it’s simply failing to care about it.
What I’m saying is that when you say the word “ought”, you mean something. Even if you can’t quite articulate it, you have some sort of standard for saying “you ought do this, you ought not do that” that is basically the definition of ought.
I’d object to this simplification of the meaning of the word (I’d argue that ‘ought’ means lots of different things in different contexts, most of which aren’t only reducible to categorically imperative moral claims), but I suppose it’s not really relevant here.
I’m pretty sure we agree and are just playing with the words differently.
There are certain things one ought to do—and by ‘ought’ I mean you will be motivated to do those things, provided you already agree that they are among the ‘things one ought to do’
and
There is no non-circular answer to the question “Why should I be moral?”, so the moral realists’ project is sunk
seem to amount to about the same thing from where I sit. But it’s a bit misleading to phrase your admission that moral realism fails (and it does, just as paperclip realism fails) as an affirmation that “there are things one ought to do”.
What’s failing?
“what is 2+3?” has an objectively true answer.
The fact that some other creature might instead want to know the answer to the question “what is 6*7?” (which also has an objectively true answer) is irrelevant.
How does that make “what is 2+3?” less real?
Similarly, how does the fact that some other beings might care about something other than morality make questions of the form “what is moral? what should I do?” non objective?
It’s nothing to do with agreement. When you ask “ought I do this?”, well… to the extent that you’re not speaking empty words, you’re asking SOME specific question.
There is some criterion by which “oughtness” can be judged… that is, the defining criterion. It may be hard for you to articulate, it may only be implicitly encoded in your brain, but to the extent that the word is a label for some concept, it means something.
I do not think you’d argue too much against this.
I make an additional claim: That that which we commonly refer to in these contexts by words like “Should”, “ought” and so on is the same thing we’re referring to when we say stuff like “morality”.
To me “what should I do?” and “what is the moral thing to do?” are basically the same question, pretty much.
“Ought I be moral?” thus would translate to “ought I be the sort of person that does what I ought to do?”
I think the answer to that is yes.
There may be beings that agree with that completely but take the view of “but we simply don’t care about whether or not we ought to do something. It is not that we disagree with your claims about whether one ought to be moral. We agree we ought to be moral. We simply place no value in doing what one ‘ought’ to do. Instead we value certain other things.” But screw them… I mean, they don’t do what they ought to do!
“what is 2+3?” has an objectively true answer. The fact that some other creature might instead want to know the answer to the question “what is 6*7?” (which also has an objectively true answer) is irrelevant.
Oh shit. I get it. Morality exists outside of ourselves in the same way that paperclips exist outside of clippies.
Babyeating is justified by some of the same impulses as baby-saving: protecting one’s own genetic line.
It’s not necessarily as well motivated by the criterion of saving sentient creatures from pain, but you might be able to make an argument for it. Maybe if you took the opposite path and said not that pain was bad, but that sentience / long life / grandchildren were good, and babyeating was a “moral decision” for having grandchildren.
First part yes, rest… not quite. (or maybe I’m misunderstanding you?)
“Protecting one’s own genetic line” would be more the evolutionary reason. ie, part of the process that led to us valuing morality as opposed to valuing paperclips. (or, hypothetically fictionally alternately, part of the process that led to the Babyeaters valuing babyeating instead of valuing morality.)
But that’s not exactly a moral justification as much as it is part of an explanation of why we care about morality. We should save babies… because! ie, Babies (or people in general, for that matter) dying is bad. Killing innocent sentients, especially those that have had the least opportunity to live, is extra bad. The fact that I care about this is ultimately in part explained via evolutionary processes, but that’s not the justification.
The hypothetical Babyeaters do not care about morality. That’s kind of the point. It’s not that they’ve come to different conclusions about morality as much as the thing that they value isn’t quite morality in the first place.
I… don’t think so. One theory of morality is that killing is bad. Sure, that’s at least a component of most moral systems, but there are certain circumstances under which killing is good or okay. Such as if the person you’re killing is a Nazi or a werewolf, or a fetus you could not support to adulthood, or trying to kill you, or a death row inmate guilty of a crime by rule of law.
Justifications for killing are often moral.
Babyeating is, in a way at least similar to human morality, justified by giving the fewer remaining children a chance at a life with the guidance of adult babyeaters, and more resources, since they don’t have to compete against millions of their siblings.
This allows babyeaters to develop something like empathy, affection, bonding, love, and happiness toward the surviving babyeater kind. Without this, babyeaters would be unable to make a babyeater society, and it’s really easy to apply utilitarianism to it in the same way utilitarian theory can be applied to human morality.
It’s also justified because it’s an individual sacrifice of your own genetic line, rather than the eating of other babyeaters’ children, which is the sort of thing a grandchildren-maximizer would do. The needs of the many > the wants of the few, which also plays a part in various theories of morality.
I’d say they reached the same conclusion that we did about most things, it’s just they took necessary and important moral sacrifice, and turned it into a ritual that is now detached from morality.
It damn well sounds like we’re talking about the same thing. The only objection I can think of is that they’re aliens and that that would be highly improbable, but if morality is just an evolutionary optimization strategy among intelligent minds, even something that could be computed mathematically, then it isn’t necessarily any more unlikely than that certain parts of human and plant anatomy would follow the Fibonacci sequence.
In some sense, the analogy between morality and arithmetic is right. On the other hand, the meaning of arithmetic can be described precisely enough that everybody means the same thing by the word. Here, I don’t know exactly what you mean by morality. Yes, saving babies, not committing murder, and all that stuff, but when it comes to details, I am pretty sure you will often find yourself disagreeing with others about what is moral. Of course, in your language, any such disagreement means that somebody is wrong about the facts. What I am uncomfortable with is the lack of an unambiguous definition.
So, there is a computation named “morality”, but nobody knows exactly what it is, and nobody gives methods for discovering new details of the as-yet-incomplete definition. Fair, but I don’t see any compelling argument for attaching words to only partly defined objects, or for caring too much about them. It seems to me that this approach pictures morality as ineffable stuff, although of a different kind than the standard bad philosophy does.
Is there some way in which this is not all fantastically circular?
How about something like this: There’s a certain set of semi abstract criteria that we call ‘morality’. And we happen to be the sorts of beings that (for various reasons) happen to care about this morality stuff as opposed to caring about something else. should we care about morality? Well, what is meant by “should”? It sure seems like that’s a term that we use to simply point to the same morality criteria/computation. In other words, “should we care about morality” seems to translate to “is it moral to care about morality” or “apply morality function to ‘care about morality’ and check the output”
It would seem also that the answer is yes, it is moral to care about morality.
Some other creatures might somewhere care about something other than morality. That’s not a disagreement about any facts or theory or anything, it’s simply that we care about morality and they may care about something like “maximize paperclip production” or whatever.
But, of course, morality is better than paper-clip-ality. (And, of course, when we say “better”, we mean “in terms of those criteria we care about”… ie, morality again.)
It’s not quite circular. Us and the paperclipper creatures wouldn’t really disagree about anything. They’d say “turning all the matter in the solar system into paperclips is paperclipish”, and we’d agree. We’d say “it’s more moral not to do so”, and they’d agree.
The catch is that they don’t give a dingdong about morality, and we don’t give a dingdong about paperclipishness. And indeed that does make us better. And if they scanned our minds to see what we mean by “better”, they’d agree. But then, that criteria that we were referring to by the term “better” is simply not something the paperclippers care about.
“we happen to care about it” is not the justification. It’s moral is the justification. It’s just that our criteria for valid moral justification is, well… morality. Which is as it should be. etc etc.
Morality is seems to be an objective criteria. Actions can be judged good or bad in terms of morality. We simply happen to care about morality instead of something else. And this is indeed a good thing.
I don’t understand two sentences in a row. Not here, not in the meta-ethics sequence, not anywhere where you guys talk about morality.
I don’t understand why I seem to be cognitively fine on other topics on Less Wrong, but then all of a sudden am Flowers for Algernon here.
I’m not going to comment anymore on this topic; it just so happens meta-morality or meta-ethics isn’t something I worry about anyway. But I would like to part with the admonition that I don’t see any reason why LW should be separating so many words from their original meanings—“good”, “better”, “should”, etc. It doesn’t seem to be clarifying things even for you guys.
I think that when something is understood—really understood—you can write it down in words. If you can’t describe an understanding, you don’t own it.
Huh? I’m asserting that most people, when they use words like “morality”, “should”(in a moral context), “better”(ditto), etc, are pointing at the same thing. That is, we think this sort of thing partly captures what people actually mean by the terms. Now, we don’t have full self knowledge, and our morality algorithm hasn’t finished reflecting (that is, hasn’t finished reconsidering itself, etc), so we have uncertainty about what sorts of things are or are not moral… But that’s a separate issue.
As far as the rest… I’m pretty sure I understand the basic idea. Anything I can do to help clarify it?
How about this: “morality is objective, and we simply happen to be the sorts of beings that care about morality as opposed to, say, evil psycho alien bots that care about maximizing paperclips instead of morality”
Does that help at all?
It looks circular to me. Of course, if you look hard enough at any views like this, the only choices are circles and terminating lines, and it seems almost an aesthetic matter which someone goes with, but this is such a small circle. It’s right to care about morality and to be moral because morality says so and morality possesses the sole capacity to identify “rightness”, including the rightness of caring about morality.
It’s more almost, well, I hate to say this, but more a matter of definitions.
ie, what do you MEAN by the term “right”?
Just keep poking your brain about that, and keep poking your brain about what you mean by “should” and what you actually mean by terms like “morality” and I think you’ll find that all those terms are pointing at the same thing.
It’s not so much “there’s this criteria of ‘rightness’ that only morality has the ability to measure” but rather an appeal to morality is what we mean when we say stuff like “‘should’ we do this? is it ‘right’?” etc...
The situation is more, well, like this:
Humans: “Morality says that, among other things, it’s more better and moral to be, well, moral. It is also moral to save lives, help people, bring joy, and a whole lot of other things”
Paperclipers: “having scanned your brains to see what you mean by these terms, we agree with your statement.”
Paperclippers: “Converting all the matter in your system into paperclips is paperclipish. Further, it is better and paperclipish to be paperclipish.”
Humans: “having scanned your minds to determine what you actually mean by those terms, we agree with your statement.”
Humans: “However, we don’t care about paperclipishness. We care about morality. Turning all the matter of our solar system (including the matter we are composed of) into paperclips is bad, so we will try to stop you.”
Paperclippers: “We do not care about morality. We care about paperclipishness. Resisting the conversion to paperclips is unpaperclipish. Therefore we will try to crush your resistance.”
This is very different from what we normally think of as circular arguments, which would be of the form of “A, therefore B, therefore A, QED”, while the other side would be “no! not A”
Here, all sides agree about stuff. It’s just that they value different things. But the fact of humans valuing the stuff isn’t the justification for valuing that stuff. The justification is that it’s moral. But the fact is that we happen to be moved by arguments like “it’s moral”, rather than the wicked paperclippers that only care about whether it’s paperclipish or not.
But why should I feel obliged to act morally instead of paperclippishly? Circles seem all well and good when you’re already inside of them, but being inside of them already is kind of not the point of discussing meta-ethics.
“should”
What do you mean by “should”? Do you actually mean anything by it other than an appeal to morality in the first place?
Well, that’s not necessarily a moral sense of ‘should’, I guess—I’m asking whether I have any sort of good reason to act morally, be it an appeal to my interests or to transcendent moral reasons or whatever.
It’s generally the contention of moralists and paperclipists that there’s always good reason for everyone to act morally or paperclippishly. But proving that this contention itself just boils down to yet another moral/paperclippy claim doesn’t seem to help their case any. It just demonstrates what a tight circle their argument is, and what little reason someone outside of it has to care about it if they don’t already.
What do you mean by “should” in this context other than a moral sense of it? What would count as a “good reason”?
As far as your statement about both moralists and paperclippers thinking there are “good reasons”… the catch is that the phrase “good reasons” is being used to refer to two distinct concepts. When a human/moralist uses it, they mean, well… good, as opposed to evil.
A paperclipper, however, is not concerned at all about that standard. A paperclipper cares about what, well, maximizes paperclips.
It’s not that it should do so, but simply that it doesn’t care what it should do. Being evil doesn’t bother it any more than failing to maximize paperclips bothers you.
Being evil is clearly worse (where by “worse” I mean, well, immoral, bad, evil, etc...) that being good. But the paperclipper doesn’t care. But you do (as far as I know. If you don’t, then… I think you scare me). What sort of standard other than morality would you want to appeal to for this sort of issue in the first place?
By that I mean rationally motivating reasons. But I’d be willing to concede, if you pressed, that ‘rationality’ is itself just another set of action-directing values. The point would still stand: if the set of values I mean when I say ‘rationality’ is incongruent with the set of values you mean when you say ‘morality,’ then it appears you have no grounds on which to persuade me to be directed by morality.
This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being. So I’m not sure if the position you’re espousing is just a complicated way of expressing surrender, or an attempt to reframe the question, or what, but it doesn’t seem to get us any more traction when it comes to answering “Why should I be moral?”
Duly noted, but is what I happen to care about relevant to this issue of meta-ethics?
Rationality is basically “how to make an accurate map of the world… and how to WIN (where win basically means getting what you “want” (where want includes all your preferences, stuff like morality, etc etc...)
Before rationality can tell you what to do, you have to tell it what it is you’re trying to do.
If your goal is to save lives, rationality can help you find ways to do that. If your goal is to turn stuff into paperclips, rationality can help you find ways to do that too.
I’m not sure I quite understand you mean by “rationally motivating” reasons.
As far as objectively compelling to any sentient (let me generalize that to any intelligent being)… Why should there be any such thing? “Doing this will help ensure your survival” “But… what if I don’t care about this?”
“doing this will bring joy” “So?”
etc etc… There are No Universally Compelling Arguments
According to the original post, strong moral realism (the above) is not held by most moral realists.
Well, my “moral reasons are to be...” there was kind of slippery. The ‘strong moral realism’ Roko outlined seems to be based on a factual premise (“All...beings...will agree...”), which I’d agree most moral realists are smart enough not to hold. The much more commonly held view seems to amount instead to a sort of … moral imperative to accept moral imperatives—by positing a set of knowable moral facts that we might not bother to recognize or follow, but ought to. Which seems like more of the same circular reasoning that Psy-Kosh has been talking about/defending.
What I’m saying is that when you say the word “ought”, you mean something. Even if you can’t quite articulate it, you have some sort of standard for saying “you ought do this, you ought not do that” that is basically the definition of ought.
I’m saying”this oughtness, whatever it is, is the same thing that you mean when you talk about ‘morality’. So “ought I be moral?” directly translates to “is it moral to be moral?”
I’m not saying “only morality has the authority to answer this question” but rather “uh… ‘is X moral?’ is kind of what you actually mean by ought/should/etc, isn’t it? ie, if I do a bit of a trace in your brain, follow the word back to its associated concepts, isn’t it going to be pointing/labeling the same algorithms that “morality” labels in your brain?
So basically it amounts to “yes, there’re things that one ought to do… and there can exist beings that know this but simply don’t care about whether or not they ‘ought’ to do something.”
It’s not that another being refuses to recognize this so much as they’d be saying “So what? we don’t care about this ‘oughtness’ business.” It’s not a disagreement, it’s simply failing to care about it.
I’d object to this simplification of the meaning of the word (I’d argue that ‘ought’ means lots of different things in different contexts, most of which aren’t only reducible to categorically imperative moral claims), but I suppose it’s not really relevant here.
I’m pretty sure we agree and are just playing with the words differently.
and
seem to amount to about the same thing from where I sit. But it’s a bit misleading to phrase your admission that moral realism fails (and it does, just as paperclip realism fails) as an affirmation that “there are things one ought to do”.
What’s failing?
“what is 2+3?” has an objectively true answer.
The fact that some other creature might instead want to know the answer to the question “what is 6*7?” (which also has an objectively true answer) is irrelevant.
How does that make “what is 2+3?” less real?
Similarly, how does the fact that some other beings might care about something other than morality make questions of the form “what is moral? what should I do?” non-objective?
It’s nothing to do with agreement. When you ask “ought I do this?”, well… to the extent that you’re not speaking empty words, you’re asking SOME specific question.
There is some criterion by which “oughtness” can be judged… that is, the defining criterion. It may be hard for you to articulate, it may only be implicitly encoded in your brain, but to the extent that word is a label for some concept, it means something.
I do not think you’d argue too much against this.
I make an additional claim: that that which we commonly refer to in these contexts by words like “should”, “ought” and so on is the same thing we’re referring to when we say stuff like “morality”.
To me, “what should I do?” and “what is the moral thing to do?” are basically the same question.
“Ought I be moral?” thus would translate to “ought I be the sort of person that does what I ought to do?”
I think the answer to that is yes.
There may be beings that agree with that completely but take the view of “but we simply don’t care about whether or not we ought to do something. It is not that we disagree with your claims about whether one ought to be moral. We agree we ought to be moral. We simply place no value in doing what one ‘ought’ to do. Instead we value certain other things.” But screw them… I mean, they don’t do what they ought to do!
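The non-disagreement point above can be made concrete with a toy sketch (plain Python; the two scoring functions and the action list are invented for illustration, not anything from the thread). Both agents compute identical values for both criteria; they differ only in which criterion they maximize.

```python
# Two toy criteria. Both agents can evaluate both functions and will
# get exactly the same numbers; there is no factual disagreement.
def morality(action):
    """Toy stand-in for the moral criterion: lives saved."""
    return action["lives_saved"]

def paperclipness(action):
    """Toy stand-in for the paperclipper criterion: paperclips made."""
    return action["paperclips_made"]

actions = [
    {"name": "save the babies", "lives_saved": 10, "paperclips_made": 0},
    {"name": "tile the solar system with paperclips",
     "lives_saved": 0, "paperclips_made": 10**6},
]

# The only difference is which function each agent feeds to max().
human_choice = max(actions, key=morality)
clippy_choice = max(actions, key=paperclipness)

print(human_choice["name"])   # save the babies
print(clippy_choice["name"])  # tile the solar system with paperclips
```

The “catch” from the thread lives entirely in the `key=` argument: swap which function goes into `max()` and the choice flips, even though every individual evaluation of either function is agreed on by both parties.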
(EDIT: minor changes to last paragraph.)
I just want to know, what is six by nine?
“nobody writes jokes in base 13” :)
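For anyone puzzled by that exchange: six times nine is 54 in decimal, and 54 happens to be written “42” in base 13 (4 × 13 + 2 = 54). A quick check in plain Python:

```python
n = 6 * 9                 # 54 in decimal
q, r = divmod(n, 13)      # 54 == 4*13 + 2, so (q, r) == (4, 2)
base13 = f"{q}{r}"
print(base13)             # prints "42"
```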
Oh shit. I get it. Morality exists outside of ourselves in the same way that paperclips exist outside clippies.
Babyeating is justified by some of the same impulses as baby-saving: protecting one’s own genetic line.
It’s not necessarily as well motivated by the criterion of saving sentient creatures from pain, but you might be able to make an argument for it. Maybe if you took the opposite path and said not that pain was bad, but that sentience/long life/grandchildren were good, and baby eating was a “moral decision” for having grandchildren.
First part yes, rest… not quite. (or maybe I’m misunderstanding you?)
“Protecting one’s own genetic line” would be more the evolutionary reason. ie, part of the process that led to us valuing morality as opposed to valuing paperclips. (Or, in the fictional alternative, part of the process that led to the Babyeaters valuing babyeating instead of valuing morality.)
But that’s not exactly a moral justification as much as it is part of an explanation of why we care about morality. We should save babies… because! ie, Babies (or people in general, for that matter) dying is bad. Killing innocent sentients, especially those that have had the least opportunity to live, is extra bad. The fact that I care about this is ultimately in part explained via evolutionary processes, but that’s not the justification.
The hypothetical Babyeaters do not care about morality. That’s kind of the point. It’s not that they’ve come to different conclusions about morality as much as the thing that they value isn’t quite morality in the first place.
I… don’t think so. One theory of morality is that killing/death is bad. Sure, that’s at least a component of most moral systems, but there are certain circumstances under which killing is good or okay. Such as if the person you’re killing is a Nazi or a werewolf, or a fetus you could not support to adulthood, or someone trying to kill you, or a death-row inmate guilty of a crime by rule of law.
Justifications for killing are often moral.
Babyeating is, in a way that at least bears similarities to human morality, justified by giving the fewer remaining children a chance at a life with the guidance of adult Babyeaters, and more resources, since they don’t have to compete against millions of their siblings.
This allows Babyeaters to develop something like empathy, affection, bonding, love and happiness for the surviving Babyeater kind. Without this, Babyeaters would be unable to make a Babyeater society, and it’s really easy to apply utilitarianism to it in the same way utilitarian theory can be applied to human morality.
It’s also justified because it’s an individual sacrifice to your own genetic line, rather than the eating of other Babyeaters’ children, which is what a grandchildren-maximizer would do. The needs of the many > the wants of the few, which also plays a part in various theories of morality.
I’d say they reached the same conclusions that we did about most things; it’s just that they took a necessary and important moral sacrifice and turned it into a ritual that is now detached from morality.
It damn well sounds like we’re talking about the same thing. The only objection I can think of is that they’re aliens and that that would be highly improbable, but if morality is just an evolutionary optimization strategy among intelligent minds, even something that could be computed mathematically, then it isn’t necessarily any more unlikely than that certain parts of human and plant anatomy would follow the Fibonacci sequence.
Only in the sense that “2 + 2 = 4” is not fantastically circular.
In some sense, the analogy between morality and arithmetic is right. On the other hand, the meaning of arithmetic can be described precisely enough that everybody means the same thing by using that word. Here, I don’t know exactly what you mean by morality. Yes, saving babies, not committing murder and all that stuff, but when it comes to details, I am pretty sure that you will often find yourself disagreeing with others about what is moral. Of course, in your language, any such disagreement means that somebody is wrong about the facts. What I am uncomfortable with is the lack of an unambiguous definition.
So, there is a computation named “morality”, but nobody knows exactly what it is, and nobody gives methods for discovering new details of the yet-incomplete definition. Fair, but I don’t see any compelling argument for attaching words to only partly defined objects, or for caring too much about them. It seems to me that this approach pictures morality as ineffable stuff, albeit of a different kind than the standard bad philosophy does.