I generally consider “you ought to do X” to mean “I’d prefer it if you did X”, and do not think judgements of “ought” can be wrong in this sense. (Setting aside the normal questions of “what does a preference mean”, which I don’t find relevant in this particular situation.) I agree that there are definitions of “ought” by which moral judgements can actually be wrong.
Incidentally, since I just read your comment over at Remind Physicalists where you pointed out that upvotes (or by extension, my “great post” comment) don’t convey any information to you about what it was about the post that was good: I found the most value in this post in the fact that it made the general argument that “our moral arguments tend to be rationalizations” with better citations and backing than I’d previously seen. The fact that it also made the case that deontology in particular tends to be rationalization was interesting, but not as valuable.
That some judgment or opinion can be changed on further reflection (and that goes for all actions; perhaps you ate the incorrect sort of cheese) motivates introducing the (more abstract) idea of correctness. Even if something is just a behavior, one can look back and rewrite the heuristics that generated it, to act differently next time. When this process itself is abstracted from the details of implementation, you get a first draft of a notion of correctness. With its help, you can avoid what you would otherwise correct and do the improved thing instead.
You’re right, though I’m not sure if “correctness” is the word I’d use for that, as it has undesirable connotations. Maybe something like “stable (upon reflection)”.
What are the undesirable connotations of “correctness”?
“Correct” is closely connected with a moral “ought”, which in turn has a number of different definitions (and thus connotations) depending on who you speak with. The statement “it would be correct for Clippy to exterminate humanity and turn the planet into a paperclip factory” might be technically right if we equate “stable” and “correct”, but it sure does sound odd. People who are already into the jargon might be fine with it, but it’s certain to create unneeded misunderstandings with newcomers.
Also, I suspect that taking a criterion like stability under reflection and calling it correctness may act as a semantic stopsign. If we just call it stability, it’s easier to ask questions like “should we require moral judgements to be stable” and “are there things other than stability that we should require”. If we call it correctness, we have already framed the default hypothesis as “stability is the thing that’s required”.
Now I’m confused about what your position is. What you said originally was:
As moral judgments can’t be right or wrong, only something you agree or disagree with.
But if you’re now saying that it makes sense to ask questions like “should we require moral judgements to be stable”, that seems to imply that moral judgments can be wrong (or at least it’s unclear that moral judgements can’t be wrong). Because asking that question implies that you think the answer might be yes, in which case unstable moral judgments would be wrong. Am I missing something here?
You’re right, I was being unclear. Sorry.
When I originally said that moral judgments couldn’t be right or wrong, I was using “ought” in the common-sense meaning of the word, which I believe to roughly correspond to emotivism.
When I said that we shouldn’t use the word correctness to refer to stability, and that we might have various criteria for correctness, I meant “ought” or “correct” in the sense of some hypothetical goal system we may wish to give an AI.
There’s some sort of a complex overlap/interaction between those two meanings in my mind, which contributed to my initial unclear usage and which prompted the mention in my original comment. Right now I’m unable to untangle my intuitions about that connection, as I hadn’t realized the existence of the issue before reading your comment.
Here’s my argument against emotivism. First, I don’t dispute that empirically most people form moral judgments from their emotional responses with little or no conscious reflection. I do dispute that this implies that when they state moral judgements, those judgements do not express propositions but only emotional attitudes (and therefore can’t be right or wrong).
Consider an analogy with empirical judgements. Suppose someone says “Earth is flat.” Are they stating a proposition about the way the world is, or just expressing that they have a certain belief? If it’s the latter, then they can’t be wrong (assuming they’re not deliberately lying). I think we would say that a statement like “Earth is flat” does express a proposition and not just a belief, and therefore can be wrong, even if the person stating it did so based purely on gut instinct, without any conscious deliberation.
You might argue that the analogy isn’t exact, because it’s clear what kind of proposition is expressed by “Earth is flat”, but we don’t know what kind of proposition moral judgements could be expressing, nor could we find out by asking the people who are stating those moral judgements. I would answer that it’s actually not obvious what “Earth is flat” means, given that the true ontology of the world is probably something like Tegmark’s Level 4 multiverse with its infinite copies of both round and flat Earths. Certainly the person saying “Earth is flat” couldn’t tell you exactly what proposition they are stating. I could also bring up other examples of statements whose meanings are unclear, which we nevertheless do not think “can’t be right or wrong”, such as “UDT is closer to the correct decision theory than CDT is” or “given what we know about computational complexity, we should bet on P!=NP”.
(To be clear, I think it may still turn out to be the case that moral judgments can’t be said to mean anything, and are mere expressions of emotional attitude (or, more generally, brain output). I just don’t see how anyone can state that confidently at this point.)
Right now I’m unable to untangle my intuitions about that connection, as I hadn’t realized the existence of the issue before reading your comment.
I’d be interested in your thoughts once you’ve untangled them.
As far as I can tell, in this comment you present an analogy between moral judgements and empirical judgements. You then provide arguments against the specific claim that “these two situations don’t share a deep cause”. But you don’t seem to have provided arguments for the judgements sharing a deep cause in the first place. It seems like a surface analogy to me.
Perhaps I should have said “reason for skepticism” instead of “argument”. Let me put it this way: what reasons do you have for thinking that moral judgments can’t be right or wrong, and have you checked whether those reasons don’t apply equally to empirical judgments?
(Note this is the same sort of “reason for skepticism” that I expressed in Boredom vs. Scope Insensitivity for example.)
Occam’s Razor, I suppose. Something roughly like emotivism seems like a wholly adequate explanation of what moral judgements are, both from a psychological and evolutionary point of view. I just don’t see any need to presume that moral judgements would be anything else, nor do I know what else they could be. From a decision-theoretical perspective, too, preferences (in the form of utility functions) are merely what the organism wants, and are simply taken as givens.
On the other hand, empirical judgements clearly do need to be evaluated for their correctness, if they are to be useful in achieving an organism’s preferences and/or survival.
Consider an analogy with empirical judgements. Suppose someone says “Earth is flat.” Are they stating a proposition about the way the world is, or just expressing that they have a certain belief? If it’s the latter, then they can’t be wrong (assuming they’re not deliberately lying).
They can be wrong if they should on reflection change this belief.
Nesov, I’m taking emotivism to be the theory that moral judgments are just expressions of current emotional attitude, and therefore can’t be wrong, even if on reflection one would change one’s emotional attitude. And I’m arguing against that theory.
Ah, I see, that was a stupid misinterpretation on my part.
I was responding to a slightly different situation: you suggested that sometimes, considerations of “correctness” or “right/wrong” don’t apply. I pointed out that we can get a sketch of these notions for most things quite easily. This sketch of “correctness” is in no way intended to be taken as the accurate principle with unlimited normative power. The question of not drowning the normative notions (in more shaky opinions) is distinct from the question of whether there are any normative notions to drown to begin with.
I think I agree with what you’re saying, but I’m not entirely sure whether I’m interpreting you correctly or whether you’re being sufficiently vague that I’m falling prey to the double illusion of transparency. Could you reformulate that?
Thanks for the detail!