It looks circular to me. Of course, if you look hard enough at any view like this, the only choices are circles and terminating lines, and it seems almost an aesthetic matter which one goes with, but this is such a small circle. It’s right to care about morality and to be moral because morality says so, and morality possesses the sole capacity to identify “rightness”, including the rightness of caring about morality.
It’s almost, well, I hate to say this, but more a matter of definitions.
i.e., what do you MEAN by the term “right”?
Just keep poking your brain about that, and keep poking your brain about what you mean by “should” and what you actually mean by terms like “morality” and I think you’ll find that all those terms are pointing at the same thing.
It’s not so much “there’s this criterion of ‘rightness’ that only morality has the ability to measure”; rather, an appeal to morality is what we mean when we say stuff like “‘should’ we do this? is it ‘right’?” etc...
The situation is more, well, like this:
Humans: “Morality says that, among other things, it’s better and moral to be, well, moral. It is also moral to save lives, help people, bring joy, and a whole lot of other things.”
Paperclippers: “Having scanned your brains to see what you mean by these terms, we agree with your statement.”
Paperclippers: “Converting all the matter in your solar system into paperclips is paperclipish. Further, it is better and paperclipish to be paperclipish.”
Humans: “Having scanned your minds to determine what you actually mean by those terms, we agree with your statement.”
Humans: “However, we don’t care about paperclipishness. We care about morality. Turning all the matter of our solar system (including the matter we are composed of) into paperclips is bad, so we will try to stop you.”
Paperclippers: “We do not care about morality. We care about paperclipishness. Resisting the conversion to paperclips is unpaperclipish. Therefore we will try to crush your resistance.”
This is very different from what we normally think of as circular arguments, which would be of the form “A, therefore B, therefore A, QED”, while the other side says “no! not A”.
Here, all sides agree about the facts. It’s just that they value different things. But the fact of humans valuing the stuff isn’t the justification for valuing that stuff. The justification is that it’s moral. It just happens that we are moved by arguments like “it’s moral”, unlike the wicked paperclippers, who only care about whether something is paperclipish or not.
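The “agree on the facts, differ in values” setup above can be put as a toy sketch (all the names, actions, and predicates here are invented purely for illustration): both agents answer every descriptive question from one shared set of facts, so they never disagree about what is the case; they diverge only when each consults its own value function.

```python
# Toy sketch (illustrative names only): two agents, one shared world-model,
# two different value functions.

SHARED_FACTS = {
    ("convert solar system to paperclips", "moral"): False,
    ("convert solar system to paperclips", "paperclipish"): True,
    ("save lives", "moral"): True,
    ("save lives", "paperclipish"): False,
}

def answer(action, predicate):
    """A descriptive question: settled by the shared facts, whoever asks."""
    return SHARED_FACTS[(action, predicate)]

# Each agent endorses an action iff its own favored predicate holds of it.
VALUES = {
    "human": lambda a: answer(a, "moral"),
    "paperclipper": lambda a: answer(a, "paperclipish"),
}

action = "convert solar system to paperclips"

# Both agents give identical answers to every factual question...
facts_human = [answer(action, p) for p in ("moral", "paperclipish")]
facts_clippy = [answer(action, p) for p in ("moral", "paperclipish")]
assert facts_human == facts_clippy

# ...yet endorse the action differently: no factual disagreement, just values.
print(VALUES["human"](action))         # False
print(VALUES["paperclipper"](action))  # True
```

The point of the sketch: the conflict at the end of the dialogue is not over any proposition both sides can evaluate, but over which evaluation each side is built to act on.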
But why should I feel obliged to act morally instead of paperclippishly? Circles seem all well and good when you’re already inside of them, but being inside of them already is kind of not the point of discussing meta-ethics.
“should”
What do you mean by “should”? Do you actually mean anything by it other than an appeal to morality in the first place?
Well, that’s not necessarily a moral sense of ‘should’, I guess—I’m asking whether I have any sort of good reason to act morally, be it an appeal to my interests or to transcendent moral reasons or whatever.
It’s generally the contention of moralists and paperclipists that there’s always good reason for everyone to act morally or paperclippishly. But proving that this contention itself just boils down to yet another moral/paperclippy claim doesn’t seem to help their case any. It just demonstrates what a tight circle their argument is, and what little reason someone outside of it has to care about it if they don’t already.
What do you mean by “should” in this context other than a moral sense of it? What would count as a “good reason”?
As far as your statement about both moralists and paperclippers thinking there are “good reasons”… the catch is that the phrase “good reasons” is being used to refer to two distinct concepts. When a human/moralist uses it, they mean, well… good, as opposed to evil.
A paperclipper, however, is not concerned at all about that standard. A paperclipper cares about what, well, maximizes paperclips.
It’s not that it should do so, but simply that it doesn’t care what it should do. Being evil doesn’t bother it any more than failing to maximize paperclips bothers you.
Being evil is clearly worse (where by “worse” I mean, well, immoral, bad, evil, etc...) than being good. But the paperclipper doesn’t care. But you do (as far as I know. If you don’t, then… I think you scare me). What sort of standard other than morality would you want to appeal to for this sort of issue in the first place?
What do you mean by “should” in this context other than a moral sense of it? What would count as a “good reason”?

By that I mean rationally motivating reasons. But I’d be willing to concede, if you pressed, that ‘rationality’ is itself just another set of action-directing values. The point would still stand: if the set of values I mean when I say ‘rationality’ is incongruent with the set of values you mean when you say ‘morality,’ then it appears you have no grounds on which to persuade me to be directed by morality.
This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being. So I’m not sure if the position you’re espousing is just a complicated way of expressing surrender, or an attempt to reframe the question, or what, but it doesn’t seem to get us any more traction when it comes to answering “Why should I be moral?”
But you do (as far as I know. If you don’t, then… I think you scare me).

Duly noted, but is what I happen to care about relevant to this issue of meta-ethics?
Rationality is basically “how to make an accurate map of the world… and how to WIN” (where “win” basically means getting what you “want”, and “want” includes all your preferences, stuff like morality, etc etc...).
Before rationality can tell you what to do, you have to tell it what it is you’re trying to do.
If your goal is to save lives, rationality can help you find ways to do that. If your goal is to turn stuff into paperclips, rationality can help you find ways to do that too.
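The goal-neutrality of rationality described here can be sketched in a few lines (a toy example; the actions and numbers are invented): one generic “pick the best available action” procedure, with the goal supplied from outside as a scoring function.

```python
# A minimal goal-neutral "planner": the search procedure is shared;
# only the supplied objective differs. Illustrative sketch only.

def plan(actions, objective):
    """Pick the available action that the given objective rates highest."""
    return max(actions, key=objective)

ACTIONS = {
    "fund vaccination drive": {"lives_saved": 1000, "paperclips": 0},
    "build paperclip factory": {"lives_saved": 0, "paperclips": 10_000},
    "do nothing": {"lives_saved": 0, "paperclips": 0},
}

# Same planner, two different goals:
save_lives = lambda a: ACTIONS[a]["lives_saved"]
make_clips = lambda a: ACTIONS[a]["paperclips"]

print(plan(ACTIONS, save_lives))  # fund vaccination drive
print(plan(ACTIONS, make_clips))  # build paperclip factory
```

Nothing in `plan` itself prefers lives to paperclips; the preference lives entirely in the objective it is handed.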
I’m not sure I quite understand what you mean by “rationally motivating” reasons.
As far as being objectively compelling to any sentient being (let me generalize that to any intelligent being)… Why should there be any such thing? “Doing this will help ensure your survival” “But… what if I don’t care about this?”
“doing this will bring joy” “So?”
etc etc… There are No Universally Compelling Arguments
This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being.

According to the original post, strong moral realism (the above) is not held by most moral realists.
Well, my “moral reasons are to be...” there was kind of slippery. The ‘strong moral realism’ Roko outlined seems to be based on a factual premise (“All...beings...will agree...”), which I’d agree most moral realists are smart enough not to hold. The much more commonly held view seems to amount instead to a sort of … moral imperative to accept moral imperatives—by positing a set of knowable moral facts that we might not bother to recognize or follow, but ought to. Which seems like more of the same circular reasoning that Psy-Kosh has been talking about/defending.
What I’m saying is that when you say the word “ought”, you mean something. Even if you can’t quite articulate it, you have some sort of standard for saying “you ought do this, you ought not do that” that is basically the definition of ought.
I’m saying: this oughtness, whatever it is, is the same thing that you mean when you talk about “morality”. So “ought I be moral?” directly translates to “is it moral to be moral?”
I’m not saying “only morality has the authority to answer this question” but rather “uh… ‘is X moral?’ is kind of what you actually mean by ought/should/etc., isn’t it?” i.e., if I do a bit of a trace in your brain and follow the word back to its associated concepts, isn’t it going to be pointing at/labeling the same algorithms that “morality” labels in your brain?
So basically it amounts to “yes, there’re things that one ought to do… and there can exist beings that know this but simply don’t care about whether or not they ‘ought’ to do something.”
It’s not that another being refuses to recognize this so much as they’d be saying “So what? we don’t care about this ‘oughtness’ business.” It’s not a disagreement, it’s simply failing to care about it.
What I’m saying is that when you say the word “ought”, you mean something. Even if you can’t quite articulate it, you have some sort of standard for saying “you ought do this, you ought not do that” that is basically the definition of ought.

I’d object to this simplification of the meaning of the word (I’d argue that ‘ought’ means lots of different things in different contexts, most of which aren’t only reducible to categorically imperative moral claims), but I suppose it’s not really relevant here.
I’m pretty sure we agree and are just playing with the words differently.
There are certain things one ought to do—and by ‘ought’ I mean you will be motivated to do those things, provided you already agree that they are among the ‘things one ought to do’

and

There is no non-circular answer to the question “Why should I be moral?”, so the moral realists’ project is sunk
seem to amount to about the same thing from where I sit. But it’s a bit misleading to phrase your admission that moral realism fails (and it does, just as paperclip realism fails) as an affirmation that “there are things one ought to do”.
What’s failing?
“what is 2+3?” has an objectively true answer.
The fact that some other creature might instead want to know the answer to the question “what is 6*7?” (which also has an objectively true answer) is irrelevant.
How does that make “what is 2+3?” less real?
Similarly, how does the fact that some other beings might care about something other than morality make questions of the form “what is moral? what should I do?” non objective?
It’s nothing to do with agreement. When you ask “ought I do this?”, well… to the extent that you’re not speaking empty words, you’re asking SOME specific question.
There is some criterion by which “oughtness” can be judged… that is, the defining criterion. It may be hard for you to articulate, it may only be implicitly encoded in your brain, but to the extent that the word is a label for some concept, it means something.
I do not think you’d argue too much against this.
I make an additional claim: that that which we commonly refer to in these contexts by words like “should”, “ought” and so on is the same thing we’re referring to when we say stuff like “morality”.
To me “what should I do?” and “what is the moral thing to do?” are basically the same question, pretty much.
“Ought I be moral?” thus would translate to “ought I be the sort of person that does what I ought to do?”
I think the answer to that is yes.
There may be beings that agree with that completely but take the view of “but we simply don’t care about whether or not we ought to do something. It is not that we disagree with your claims about whether one ought to be moral. We agree we ought to be moral. We simply place no value in doing what one ‘ought’ to do. Instead we value certain other things.” But screw them… I mean, they don’t do what they ought to do!
(EDIT: minor changes to last paragraph.)
I just want to know, what is six by nine?
“nobody writes jokes in base 13” :)
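For anyone puzzled by the exchange: in The Hitchhiker’s Guide to the Galaxy the “answer” is 42, and the question turns out to be “what do you get if you multiply six by nine?”. That is wrong in base 10 (6 × 9 = 54), but 54 written in base 13 reads “42”, since 4 × 13 + 2 = 54; hence the quip that nobody writes jokes in base 13. A quick check:

```python
# 6 * 9 is 54 in base 10; rendered in base 13 it reads "42" (4*13 + 2 = 54).

def to_base(n, base):
    """Write a non-negative integer n in the given base (digits 0-9 suffice here)."""
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits)) or "0"

print(6 * 9)               # 54
print(to_base(6 * 9, 13))  # 42
```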