Consider this dialog:
Student: “Wise master, what ought I do?”
Wise master: “You ought to help the poor by giving 50% of your income to efficient charity and supporting the European-style welfare state.”
Student: “Alright.”
*Student runs off and gives 50% of his or her income to efficient charity and supports the European-style welfare state.*
This dialog rings true as a fact about ought statements—once we become convinced of them, they do and should constrain our behavior.
But my dialogs and your dialogs contradict each other! Because if “ought” determines our behavior, and we can define what “ought” means, then we can define proper behavior into existence—a construction as absurd as Descartes defining God into existence or Plato defining man as both a hairless featherless biped and a mortal.
We must give up one, and I say give up yours. “ought” is one of those words that we are not free to define—it has a single meaning. Look to its consequences, not its causes.
I’m not sure I understand. Are you saying that we are not free to stipulate definitions for the word-tools we use (when it comes to morality), because you have a conceptual intuition in favor of motivational internalism for the use of ‘ought’ terms?
Wikipedia defines motivational internalism as the belief that:
there is an internal, necessary connection between one’s conviction that X ought to be done and one’s motivation to do X.
I want to view this as a morally necessary connection. One should do what one ought to do, and this serves as the definition of “ought”.
You will note that I am using circular definitions. That is because I can’t define “should” except in terms of things that have a hidden “should” in there. But I am trying to access the part of you that understands what I am saying.
The useful analogue is this:
modus ponens: “If you know ‘A’, and you know ‘If A, then B’, then you know B.”
It’s a circular definition getting at something which you can’t put into words. I would be wrong to define “If-then” as something else, like maybe “If A, then B” means “75% of elephants with A written on them believe B”, because it’s already defined.
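(For concreteness, a minimal sketch of this point in Lean: modus ponens is not something you get to redefine there; it is bare function application. The theorem name is arbitrary.)

```lean
-- Modus ponens: from A and A → B, conclude B.
-- The "proof" is just function application: h2 turns evidence for A into evidence for B.
theorem modus_ponens {A B : Prop} (h1 : A) (h2 : A → B) : B :=
  h2 h1
```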
Does that make any sense?
Unfortunately, I still don’t follow you. Or at least, the only interpretations I’ve come up with look so obviously false that I resist attributing them to you. Maybe I can grok your disagreement from another angle. Let me try to pinpoint where we disagree. I hope you’ll have some time to approach mutual understanding on this issue. When Will Sawin disagrees with me, I pay attention.
Do you agree that there are many words X such that X is used by different humans to mean slightly different things?
Do you agree that there are many words X such that different humans have different intuitions about the exact extension of X, especially in bizarre sci-fi hypothetical scenarios?
Do you agree that many humans use imperative terms like ‘ought’ and ‘should’ to communicate particular meanings, with these meanings often being stipulated within the context of a certain community?
I’ll stop there for now.
Thanks. I’m thinking of doing a post on the discussion section where I can explain where my intuitions come from in more detail.
For your questions:
Yes.
Yes.
I don’t really know what the third question means. It seems like the primary use of “ought” and “should” is as part of an attempt to convince people to do what you say they should do. I would say that is the meaning being communicated. There are various ways this could work within the context of a community. Are you saying that you’re only trying to convince members of that community?
Note: I’m planning to come back to this discussion in a few days. Recently my time has been swamped running SI’s summer minicamp.
I may also write something which expresses my ideas in a new, more concise and clear form.
I think that would be the most efficient thing to do. For now, I’ll wait on that.
If you haven’t noticed, I just made that post.
Any response to this?
Excellent. I’m busy the next few days, but I’ll respond when I can, on that thread.
I think you meant to leave out either the “except” or the “don’t”?
Correct.
Your dialogue looks similar to the one about losing weight above. I can define proper behavior given my terminal values. If I want to lose weight, I should eat less. Upon learning this fact, I start eating less. My values and some facts about the world are sufficient to determine my proper behavior. “Defining my behavior into existence” seems no more absurd to me than defining the rational action using a decision theory.
I’m not sure I’ve explained myself very clearly here. Please advise on what, if anything, I’m saying that is hard to understand.
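(A minimal sketch of that claim in Python, with hypothetical names throughout: fix the terminal values and the facts about the world, and the “proper” action falls out mechanically, just as a decision theory would have it.)

```python
# "Proper behavior" as whatever action best serves fixed terminal values,
# given facts about which action leads to which outcome.
# All names here are illustrative, not anyone's actual theory.

def proper_action(actions, predict_outcome, terminal_value):
    """Pick the action whose predicted outcome scores highest under one's values."""
    return max(actions, key=lambda a: terminal_value(predict_outcome(a)))

# Facts about the world: eating less leads to losing weight.
facts = {"eat less": "lose weight", "eat more": "gain weight"}

# Terminal values: I want to lose weight.
def value(outcome):
    return 1.0 if outcome == "lose weight" else 0.0

print(proper_action(facts, facts.get, value))  # -> eat less
```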
If it is the case that you should do what you want, yes.
If you want to punch babies, then you should not punch babies. (x)
If you should lose weight, then you should eat less.
Proper values and some facts about the world are sufficient to determine proper behavior.
What are proper values? Well, they’re the kind of values that determine proper behavior.
x: Saying this requires me to know a moral fact. This moral fact is a consequence of an assumption I made about the true nature of reality. But to assume is to stoop lower than to define.
If you want to punch babies, then you should not punch babies. (x)
This is WillSawin_Should. NormalAnomaly_Should says the same thing, because we’re both humans. #$%^$_Should, where #$%^$ is the name of an alien from planet Mog, may say something completely different. You and I both use the letter sequence s-h-o-u-l-d to refer to the output of our own unique should-functions.
Lukeprog, the above is how I understand your post. Is it correct?
No. We both use the letter sequence “should” to direct our actions.
We believe that we should follow the results of our should-functions. We believe that the alien from Mog is wrong to follow the results of his should-function. These are beliefs, not definitions.
Imagine if you said “The sun will rise tomorrow” and I responded:
“This is NormalAnomaly_Will. WillSawin_Will says the same thing, because we’re both humans. #$%^$_Will, where #$%^$ is the name of an alien from planet Mog, may say something completely different. You and I both use the letter sequence w-i-l-l to refer to the output of our own unique will-functions.”
Normal_Anomaly’s ontology is coherent. What you describe regarding beliefs is also coherent but refers to a different part of reality space than what Normal is trying to describe.
I don’t understand what “ontology” and “reality space” mean in this context.
Here’s a guess:
You’re saying that the word “WillSawin_Should” is a reasonable word to use. It is well-defined, and useful in some contexts. But Plain-Old-Should is also a word with a meaning that is useful in some contexts.
in which case I would agree with you.
I was trying to convey that when you speak of beliefs and determination of actions you are describing an entirely different concept than what Normal_Anomaly was describing. To that extent, presenting your statements as a contradiction of Normal’s is both a conversational and an epistemic error.
You can write_underscored_names by escaping the _ with a preceding \.
So you’re defining “should” to describe actions that best further one’s terminal values? Or is there an additional “shouldness” about terminal values too?
Also, regarding
Because if “ought” determines our [proper] behavior, and we can define what “ought” means, then we can define proper behavior into existence
in the grandparent, it sounds like you’re equivocating between defining what the word “ought” means and changing the true nature of the concept that “ought” usually refers to. (Unless I was wrong to add the “proper” in the quote, in which case I actually don’t know what point you were making.) To wit: “ought” is just a word that we can define as we like, but the concept that “ought” usually refers to is an adaptation and declaring that “ought” actually means something different will not change our actual behavior, except insofar as you succeed in changing others’ terminal values.
Incidentally this is a very slippery topic for me to talk about for reasons that I don’t fully understand, but I suspect it has to do with my moral intuitions constantly intervening and loudly claiming “no, it should be this way!” and the like. I also strongly suspect that this difficulty is nearly universal among humans.
Or is there an additional “shouldness” about terminal values too?
There is.
(Unless I was wrong to add the “proper” in the quote, in which case I actually don’t know what point you were making.)
You weren’t.
in the grandparent, it sounds like you’re equivocating between defining what the word “ought” means and changing the true nature of the concept that “ought” usually refers to.
I do not think I am equivocating. Rather, I disagree with lukeprog about what people are changing when they disagree about morality.
lukeprog thinks that people disagree about what “ought” means / the definition of “ought”.
I believe that (almost) everybody thinks “ought” means the same thing, and that people disagree about the concept that “ought” usually refers to.
This concept is special because it has a reverse definition. Normally a word is defined by the situations in which you can infer that a statement about that word is true. However, “ought” is defined the other way—by what you can do when you infer that a statement about “ought” is true.
Is it the case that Katy ought to buy a car? Well, I don’t know. But I know that if Katy is rational, and she becomes convinced that she ought to buy a car, then she will buy a car.
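(A minimal sketch of this reverse direction in Python, hypothetical names throughout: the content of an “ought” belief is exhausted by what it does to action.)

```python
# Motivational-internalist toy model: for a rational agent, accepting
# "I ought to do X" just is committing to do X; no further fact about
# "ought" is consulted. Names are illustrative only.

class RationalAgent:
    def __init__(self, name):
        self.name = name
        self.actions = []

    def convince_of_ought(self, x):
        # Becoming convinced of "I ought to do X" immediately issues the action.
        self.actions.append(x)

katy = RationalAgent("Katy")
katy.convince_of_ought("buy a car")
print(katy.actions)  # -> ['buy a car']
```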
I believe that (almost) everybody thinks “ought” means the same thing, and that people disagree about the concept that “ought” usually refers to.
What is the difference between what “ought” means and what it refers to?
Edit:
This concept is special because it has a reverse definition. Normally a word is defined by the situations in which you can infer that a statement about that word is true. However, “ought” is defined the other way—by what you can do when you infer that a statement about “ought” is true.
In the above, do you say that “You ought to do X.” is exactly equivalent to the command “Do X!”, and “I ought to do X.” means “I will do X on the first opportunity and not by accident”?
Is it the case that Katy ought to buy a car? Well, I don’t know. But I know that if Katy is rational, and she becomes convinced that she ought to buy a car, then she will buy a car.
Ought we base the definition of “ought” on a pretty complicated notion of rationality?
In the above, do you say that “You ought to do X.” is exactly equivalent to the command “Do X!”, and “I ought to do X.” means “I will do X on the first opportunity and not by accident”?
To the first one, yes, but they have different connotations.
To the second one, sort of. “I” can get fuzzy here. I have akrasia problems. I should do my work, but I will not do it for a while. If you cut out a sufficiently small portion of my mind then this portion doesn’t have the opportunity to do my work until it actually does my work, because the rest of my mind is preventing it.
Furthermore, I am thinking about them more internally. “Should” isn’t part of predicting actions, it’s part of choosing them.
Ought we base the definition of “ought” on a pretty complicated notion of rationality?
It doesn’t seem complicated to me. Certainly simpler than lukeprog’s definitions.
These issues are ones that should be cleared up by the discussion post I’m going to write in a second.
It seems that my further questions rather ought to wait a second, then.
It isn’t equivalent to a moral “ought”, since one person can command another to do something they both think is immoral.
This would require one of two situations:
a. A person consisting of multiple competing subagents, where the “ought” used by one is not the same as the “ought” used by another.
b. A person with two different systems of morality, one dictating what is moral and the other dictating how much deviation from it they will accept.
In either case you would need two words because there are two different kinds of should in the mind.
I gave the situation of one person commanding another. You replied with a scenario about one person with different internal systems. I don’t know why you did that.
It’s generally believed that you shouldn’t tell people to do things they shouldn’t do.
So your problem reduces to the problem of someone who does things that they believe they shouldn’t.
If you’re not willing to make that reduction, I’ll have to think about things further.
I think it is obvious that this involves someone doing something they think they shouldn’t. Which is not uncommon.
Which requires either a or b.
(yay, I finally caused a confusion that should be really easy to clear up!)
Alice and Bob agree that “Earth” means “that giant thing under us”. Alice and Bob disagree about the Earth, though. They disagree about that giant thing under them. Alice thinks it’s round, and Bob thinks it’s flat.
Yes, this is the distinction I had in mind.
So do you now think that I do not equivocate?
No, I think there is still equivocation in the claim that your dialog and Luke’s contradict one another. Luke is talking about the meaning of the word “Earth” and you are talking about the giant thing under us.
I also do not completely buy the assertion that “ought” is special because it has a reverse definition. This assertion itself sounds to me like a top-down definition of the ordinary type, if an unusually complex one.
Well there are two possible definitions, Luke’s and my reverse definition (or top-down definition of the ordinary type).
If you accept both definitions, then you have just proved that the right thing to do is XYZ. One shouldn’t be able to prove this just from definitions. Therefore you cannot accept both definitions.
Let’s try an analogy in another normative arena.
Suppose we propose to define rationality extensionally. Scientists study rationality for many decades and finally come up with a comprehensive definition of rationality that becomes the consensus. And then they start using that definition to shape their own inference patterns. “The rational thing to do is XYZ,” they conclude, using their definitions.
Where’s the problem?
Because the rational thing means the thing that produces good results, so you need a shouldness claim to scientifically study rationality claims, and science cannot produce shouldness claims.
The hidden assumption is something like “Good results are those produced by thinking processes that people think are rational” or something along those lines. If you accept that assumption, or any similar assumption, such a study is valid.
Let’s temporarily segregate rational thought vs. rational action, though they will ultimately need to be reconciled. I think that we can, and must, characterize rational thought first. We must, because “good results” are good only insofar as they are desired by a rational agent. We can, because while human beings aren’t very good individually at explicitly defining rationality, they are good, collectively via the scientific enterprise, at knowing it when they see it.
In this context, “science cannot produce shouldness claims” is contentious. Best not to make a premise out of it.
But why are rational thoughts good? Because they produce rational actions.
The circular bootstrapping way of doing this may not be untenable. (This is similar to arguments of Eliezer’s.)
In this context, “science cannot produce shouldness claims” is contentious. Best not to make a premise out of it.
You’re making hidden assumptions about what “good” and “rational” mean. Of course, some people accept those assumptions, that’s their prerogative. But those assumptions are not true by definition.
Yes I am making assumptions (they’re pretty open) about goodness and rationality. Or better: hypotheses. I advance them because I think theories constructed around them will ultimately cohere better with our most confident judgments and discoveries. Try them and see. By all means try alternatives and compare.
Circular bootstrapping is fine. And sure, the main reason rational thoughts are good is that they typically lead to good actions. But they’re still rational thoughts—thus correct—even when they don’t.
I think if you accept that you are making “assumptions” or “hypotheses” you agree with me.
Because you are thinking about the moral issue in a way reminiscent of scientific issues, as a quest for truth, not as a proof-by-definition.
I have difficulty applying the analogy to “ought”.
I’m not really sure why this was downvoted, compared to everything else I’ve written on the topic.
Did it have to do with the excessive bolding somehow? Do my claims sound especially stupid stated like this?
It seems to completely miss Normal_Anomaly’s point, speaking right past him. As to the ‘compared to everything else you have written’ I refrained from downvoting your replies to myself even though I would have downvoted them if they were replies to a third party. It is a general policy of mine that I find practical, all else being equal.
Not for objective metaethicists, who seem to be able to escape your circle.
This doesn’t seem to actually be a term, after a few seconds of googling. Could you provide a link to a description of this philosophy?
I would much prefer to keep Luke’s. Basically because it is actually useful when communicating with others who aren’t interested in having the other person’s values rammed down their throat. If you went around saying an ought at me using your definition, then obviously you should expect me to reject it regardless of the content, because the way you are using the term assumes that the recipient is ultimately subject to something that refers to your own mind.
So if you tell me I should go do something, and I agree with you, and I never go do that, you would see nothing inconsistent?
I’m totally comfortable with claims of the form “If you believe XYZ normative statements, then you should do W.” It should work just as well as conditionals about physical statements.
No, that is not something that is implied by my statements.
It is an example of someone not acting according to their own professed ideals and is inconsistent in the same way that all such things are.
So you’re saying that I am only allowed to use “should” to mean “WillSawin_should”. I can’t use it to mean “wedrifid_should”.
This seems like an odd way to run a conversation to me.
No, that is another rather bizarre thing which I definitely did not say. Perhaps it will be best for me to just leave it with my initial affirmation of Luke’s post:
We must give up one, and I say give up yours.
I would much prefer to keep Luke’s.
In my observation Luke’s system for reducing moral claims provides more potential for enabling effective communication between agents and a more comprehensive way to form a useful epistemic model of such conversations.
So suppose I say:
“I wedrifid_should do X” and then don’t do X. Clearly, I am not being inconsistent.
but if I say:
“I should do X” and then don’t do X then I am being inconsistent.
Something must therefore prevent me from using “should” to mean “wedrifid_should”.
I’d agree that you can (and probably do) use plain old “should” to mean multiple things. The trouble is that this isn’t very useful for communication. So when communicating, we humans use heuristics to figure out which “should” is meant.
In the example of the conversation, if I say “you should X” and you say “I agree,” then I generally use a shortcut to think you meant Will-should. The obvious reason for this is that if you meant Manfred-should, you would have just repeated my own statement back to me, which would be not communicating anything, and it’s a decent shortcut to assume that when people say something they want to communicate. The only other obvious “should” in the conversation is Will-should, so it’s a good guess that you meant Will-should.
“I agree” generally means the same thing as repeating someone’s statement back at them. We can expand:
“You wedrifid_should do X”
“I agree, I Will_should do X”
I seem to be making an error of interpretation here if I say things the way you normally say them! Why, in this instance, is it considered normal and acceptable to interpret professed agreement as expressing a different belief than the one being agreed to?
It all seems fishy to me.
Huh, yeah, that is weird. But on thinking about it, I can only think of two situations where I’ve heard or used “I agree.” One is if there’s a problem with an unsure solution, where it means “My solution-finding algorithm also returned that,” and the other is if someone offers a suggestion about what should be done, where I seem to be claiming it usually means “My should-finding algorithm also returned that.”
In the first case, would you say that the Manfred_solution is something or other? You and I mean something different by “solution”?
Of course not.
So why would you do something different for “should”?
Because there’s no objective standard against which “should algorithms” can be tested, like there is for “solution-finding algorithms.” If there were no objective standard for solutions, I would absolutely stop talking about “the solution” and start talking about the Manfred_solution.
Didn’t you say in the other thread that we can disagree about the proper state of the world?
When we do that, what thing are we disagreeing about? It’s certainly not a standard, but how can it be subjective?
That’s the objective thing I am talking about.
Hm. I agree that you can disagree about some world-state that you’d like, but I don’t understand how we could move that from “we disagree” to “there is one specific world-state that is the standard.” So I stand by “no objective standard” for now.
I assume you are talking about proper or desirable world-states rather than actual ones.
I didn’t say it was the standard.
The idea is this.
If we disagree about what world state is best, there has to be some kind of statement I believe and you don’t, right? Otherwise, we wouldn’t disagree. Some kind of statement like “This world state is best.”
But the difference isn’t about some measurable property of the world, but about internal algorithms for deciding what to do.
Sure, to the extent that humans are irrational and can pit one desire against another, arguing about how to determine “best” is not a total waste of time, but I don’t think that has much bearing on subjectivity.
I’m losing the thread of the conversation at this point.
I have no solution to that problem.
Perhaps the meaning of the paragraph you quote wasn’t clear—I was trying hard to be polite rather than frank. You seem to be attacking a straw man using rhetorical questions so trivial that I would consider them disingenuous prior to adjusting for things like the illusion of transparency. Your conversation with lukeprog seems like one with more potential for useful communication. He cares about the subject far more than I do.
But my dialogs and your dialogs contradict each other! Because if “ought” determines our behavior, and we can define what “ought” means, then we can define proper behavior into existence
Moral ideas don’t determine behaviour with any great reliability, so there is no analytical or necessary relationship there. If that’s what you were getting at.
I don’t want to spam, but if people haven’t noticed, hopefully this comment will inform them that my first-ever LessWrong post, which might or might not make this clearer, is up.