What do you mean by “should” in this context, other than in a moral sense? What would count as a “good reason”?
As far as your statement about both moralists and paperclippers thinking there are “good reasons”… the catch is that the phrase “good reasons” is being used to refer to two distinct concepts. When a human/moralist uses it, they mean, well… good, as opposed to evil.
A paperclipper, however, is not concerned at all about that standard. A paperclipper cares about what, well, maximizes paperclips.
It’s not that it should do so, but simply that it doesn’t care what it should do. Being evil doesn’t bother it any more than failing to maximize paperclips bothers you.
Being evil is clearly worse (where by “worse” I mean, well, immoral, bad, evil, etc...) than being good. But the paperclipper doesn’t care. But you do (as far as I know. If you don’t, then… I think you scare me). What sort of standard other than morality would you want to appeal to for this sort of issue in the first place?
By that I mean rationally motivating reasons. But I’d be willing to concede, if you pressed, that ‘rationality’ is itself just another set of action-directing values. The point would still stand: if the set of values I mean when I say ‘rationality’ is incongruent with the set of values you mean when you say ‘morality,’ then it appears you have no grounds on which to persuade me to be directed by morality.
This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being. So I’m not sure if the position you’re espousing is just a complicated way of expressing surrender, or an attempt to reframe the question, or what, but it doesn’t seem to get us any more traction when it comes to answering “Why should I be moral?”
Duly noted, but is what I happen to care about relevant to this issue of meta-ethics?
Rationality is basically “how to make an accurate map of the world… and how to WIN” (where “win” basically means getting what you “want”, and “want” includes all your preferences, stuff like morality, etc etc...).
Before rationality can tell you what to do, you have to tell it what it is you’re trying to do.
If your goal is to save lives, rationality can help you find ways to do that. If your goal is to turn stuff into paperclips, rationality can help you find ways to do that too.
I’m not sure I quite understand what you mean by “rationally motivating” reasons.
As far as being “objectively compelling to any sentient being” goes (let me generalize that to any intelligent being)… why should there be any such thing? “Doing this will help ensure your survival” “But… what if I don’t care about this?”
“doing this will bring joy” “So?”
etc etc… There are No Universally Compelling Arguments
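(A minimal sketch of the point above, with hypothetical action names and made-up payoff numbers: the same goal-neutral “rationality” machinery wins at whatever utility function it is handed, and nothing inside it privileges morality over paperclips.)

```python
# Toy illustration only: a goal-neutral chooser. The chooser itself never
# changes; only the utility function passed in does.

def choose(actions, utility):
    """Return the action that scores highest under the given utility."""
    return max(actions, key=utility)

actions = ["fund_vaccines", "build_paperclip_factory", "do_nothing"]

# A moralist's utility function (illustrative numbers only).
lives_saved = {"fund_vaccines": 1000, "build_paperclip_factory": 0, "do_nothing": 0}

# A paperclipper's utility function (illustrative numbers only).
paperclips_made = {"fund_vaccines": 0, "build_paperclip_factory": 10**6, "do_nothing": 0}

print(choose(actions, lives_saved.get))      # -> fund_vaccines
print(choose(actions, paperclips_made.get))  # -> build_paperclip_factory
```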
According to the original post, strong moral realism (the view that moral reasons are inherently, objectively compelling to any sentient being) is not held by most moral realists.
Well, my “moral reasons are to be...” there was kind of slippery. The ‘strong moral realism’ Roko outlined seems to be based on a factual premise (“All...beings...will agree...”), which I’d agree most moral realists are smart enough not to hold. The much more commonly held view seems to amount instead to a sort of … moral imperative to accept moral imperatives—by positing a set of knowable moral facts that we might not bother to recognize or follow, but ought to. Which seems like more of the same circular reasoning that Psy-Kosh has been talking about/defending.
What I’m saying is that when you say the word “ought”, you mean something. Even if you can’t quite articulate it, you have some sort of standard for saying “you ought to do this, you ought not do that”, and that standard is basically the definition of ought.
I’m saying this oughtness, whatever it is, is the same thing that you mean when you talk about ‘morality’. So “ought I be moral?” directly translates to “is it moral to be moral?”
I’m not saying “only morality has the authority to answer this question” but rather “uh… ‘is X moral?’ is kind of what you actually mean by ought/should/etc, isn’t it?” I.e., if I do a bit of a trace in your brain and follow the word back to its associated concepts, isn’t it going to be pointing at/labeling the same algorithms that “morality” labels in your brain?
So basically it amounts to “yes, there’re things that one ought to do… and there can exist beings that know this but simply don’t care about whether or not they ‘ought’ to do something.”
It’s not that another being refuses to recognize this so much as that they’d be saying “So what? We don’t care about this ‘oughtness’ business.” It’s not a disagreement; it’s simply failing to care about it.
I’d object to this simplification of the meaning of the word (I’d argue that ‘ought’ means lots of different things in different contexts, most of which aren’t only reducible to categorically imperative moral claims), but I suppose it’s not really relevant here.
I’m pretty sure we agree and are just playing with the words differently.
“There are certain things one ought to do—and by ‘ought’ I mean you will be motivated to do those things, provided you already agree that they are among the ‘things one ought to do’”
and
“There is no non-circular answer to the question ‘Why should I be moral?’, so the moral realists’ project is sunk”
seem to amount to about the same thing from where I sit. But it’s a bit misleading to phrase your admission that moral realism fails (and it does, just as paperclip realism fails) as an affirmation that “there are things one ought to do”.
What’s failing?
“what is 2+3?” has an objectively true answer.
The fact that some other creature might instead want to know the answer to the question “what is 6*7?” (which also has an objectively true answer) is irrelevant.
How does that make “what is 2+3?” less real?
Similarly, how does the fact that some other beings might care about something other than morality make questions of the form “what is moral? what should I do?” non-objective?
It’s nothing to do with agreement. When you ask “ought I do this?”, well… to the extent that you’re not speaking empty words, you’re asking SOME specific question.
There is some criterion by which “oughtness” can be judged… that is, the defining criterion. It may be hard for you to articulate, it may only be implicitly encoded in your brain, but to the extent that word is a label for some concept, it means something.
I do not think you’d argue too much against this.
I make an additional claim: that what we commonly refer to in these contexts by words like “should”, “ought”, and so on is the same thing we’re referring to when we say stuff like “morality”.
To me, “what should I do?” and “what is the moral thing to do?” are basically the same question.
“Ought I be moral?” thus would translate to “ought I be the sort of person that does what I ought to do?”
I think the answer to that is yes.
There may be beings that agree with that completely but take the view of “but we simply don’t care about whether or not we ought to do something. It is not that we disagree with your claims about whether one ought to be moral. We agree we ought to be moral. We simply place no value in doing what one ‘ought’ to do. Instead we value certain other things.” But screw them… I mean, they don’t do what they ought to do!
(EDIT: minor changes to last paragraph.)
I just want to know, what is six by nine?
“nobody writes jokes in base 13” :)
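(For what it’s worth, the exchange above appears to be the Hitchhiker’s Guide joke: 6 × 9 = 54, which happens to be written “42” in base 13. A quick check:)

```python
# 6 * 9 = 54; in base 13 that is 4*13 + 2, i.e. the digits "4" and "2".
n = 6 * 9
print(n)              # 54
print(divmod(n, 13))  # (4, 2) -> written "42" in base 13
```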