Your dialogue looks similar to the one about losing weight above. I can define proper behavior given my terminal values. If I want to lose weight, I should eat less. Upon learning this fact, I start eating less. My values and some facts about the world are sufficient to determine my proper behavior. “Defining my behavior into existence” seems no more absurd to me than defining the rational action using a decision theory.
I’m not sure I’ve explained myself very clearly here. Please point out anything I’m saying that is hard to understand.
If it is the case that you should do what you want, yes.
If you want to punch babies, then you should not punch babies. (x)
If you should lose weight, then you should eat less.
Proper values and some facts about the world are sufficient to determine proper behavior.
What are proper values? Well, they’re the kind of values that determine proper behavior.
x: Saying this requires me to know a moral fact. This moral fact is a consequence of an assumption I made about the true nature of reality. But to assume is to stoop lower than to define.
If you want to punch babies, then you should not punch babies. (x)
This is WillSawin_Should. NormalAnomaly_Should says the same thing, because we’re both humans. #$%^$_Should, where #$%^$ is the name of an alien from planet Mog, may say something completely different. You and I both use the letter sequence s-h-o-u-l-d to refer to the output of our own unique should-functions.
Lukeprog, the above is how I understand your post. Is it correct?
No. We both use the letter sequence “should” to direct our actions.
We believe that we should follow the results of our should-functions. We believe that the alien from Mog is wrong to follow the results of his should-function. These are beliefs, not definitions.
Imagine if you said “The sun will rise tomorrow” and I responded:
“This is NormalAnomaly_Will. WillSawin_Will says the same thing, because we’re both humans. #$%^$_Will, where #$%^$ is the name of an alien from planet Mog, may say something completely different. You and I both use the letter sequence w-i-l-l to refer to the output of our own unique will-functions.”
Normal_Anomaly’s ontology is coherent. What you describe regarding beliefs is also coherent but refers to a different part of reality space than what Normal is trying to describe.
I don’t understand what “ontology” and “reality space” mean in this context.
Here’s a guess:
You’re saying that the word “WillSawin_Should” is a reasonable word to use. It is well-defined, and useful in some contexts. But Plain-Old-Should is also a word with a meaning that is useful in some contexts.
If that is what you are saying, I would agree with you.
I was trying to convey that when you speak of beliefs and the determination of actions, you are describing an entirely different concept from the one Normal_Anomaly was describing. To the extent that you present your statements as a contradiction of Normal’s, you are making both a conversational and an epistemic error.
You can write_underscored_names by escaping the _ by preceding it with a \.
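For example (assuming the comment box renders Markdown, in which text between two underscores becomes italic), typing Normal\_Anomaly keeps the underscore literal, so a later underscore in the same comment won’t accidentally italicize everything in between.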
So you’re defining “should” to describe actions that best further one’s terminal values? Or is there an additional “shouldness” about terminal values too?
Also, regarding
Because if “ought” determines our [proper] behavior, and we can define what “ought” means, then we can define proper behavior into existence
in the grandparent, it sounds like you’re equivocating between defining what the word “ought” means and changing the true nature of the concept that “ought” usually refers to. (Unless I was wrong to add the “proper” in the quote, in which case I actually don’t know what point you were making.) To wit: “ought” is just a word that we can define as we like, but the concept that “ought” usually refers to is an adaptation, and declaring that “ought” actually means something different will not change our actual behavior, except insofar as you succeed in changing others’ terminal values.
Incidentally this is a very slippery topic for me to talk about for reasons that I don’t fully understand, but I suspect it has to do with my moral intuitions constantly intervening and loudly claiming “no, it should be this way!” and the like. I also strongly suspect that this difficulty is nearly universal among humans.
Or is there an additional “shouldness” about terminal values too?
There is.
(Unless I was wrong to add the “proper” in the quote, in which case I actually don’t know what point you were making.)
You weren’t.
in the grandparent, it sounds like you’re equivocating between defining what the word “ought” means and changing the true nature of the concept that “ought” usually refers to.
I do not think I am equivocating. Rather, I disagree with lukeprog about what people are changing when they disagree about morality.
lukeprog thinks that people disagree about what “ought” means / the definition of “ought”.
I believe that (almost) everybody thinks “ought” means the same thing, and that people disagree about the concept that “ought” usually refers to.
This concept is special because it has a reverse definition. Normally a word is defined by the situations in which you can infer that a statement about that word is true. However, “ought” is defined the other way—by what you can do when you infer that a statement about “ought” is true.
Is it the case that Katy ought to buy a car? Well, I don’t know. But I know that if Katy is rational, and she becomes convinced that she ought to buy a car, then she will buy a car.
I believe that (almost) everybody thinks “ought” means the same thing, and that people disagree about the concept that “ought” usually refers to.
What is the difference between what “ought” means and what it refers to?
Edit:
This concept is special because it has a reverse definition. Normally a word is defined by the situations in which you can infer that a statement about that word is true. However, “ought” is defined the other way—by what you can do when you infer that a statement about “ought” is true.
In the above, do you say that “You ought to do X.” is exactly equivalent to the command “Do X!”, and that “I ought to do X.” means “I will do X at the first opportunity and not by accident”?
Is it the case that Katy ought to buy a car? Well, I don’t know. But I know that if Katy is rational, and she becomes convinced that she ought to buy a car, then she will buy a car.
Ought we base the definition of “ought” on a pretty complicated notion of rationality?
In the above, do you say that “You ought to do X.” is exactly equivalent to the command “Do X!”, and that “I ought to do X.” means “I will do X at the first opportunity and not by accident”?
To the first one, yes, but they have different connotations.
To the second one, sort of. “I” can get fuzzy here. I have akrasia problems. I should do my work, but I will not do it for a while. If you cut out a sufficiently small portion of my mind then this portion doesn’t have the opportunity to do my work until it actually does my work, because the rest of my mind is preventing it.
Furthermore, I am thinking about them more internally: “should” isn’t part of predicting actions, it’s part of choosing them.
Ought we base the definition of “ought” on a pretty complicated notion of rationality?
It doesn’t seem complicated to me. Certainly simpler than lukeprog’s definitions.
These issues are ones that should be cleared up by the discussion post I’m going to write in a second.
It seems that my further questions rather ought to wait a second, then.
It isn’t equivalent to a moral “ought”, since one person can command another to do something they both think is immoral.
This would require one of two situations:
a. A person consisting of multiple competing subagents, where the “ought” used by one is not the same as the “ought” used by another.
b. A person with two different systems of morality, one dictating what is moral and the other dictating how much deviation from it they will accept.
In either case you would need two words because there are two different kinds of should in the mind.
I gave the situation of one person commanding another. You replied with a scenario about one person with different internal systems. I don’t know why you did that.
It’s generally believed that you shouldn’t tell people to do things they shouldn’t do.
So your problem reduces to the problem of someone who does things that they believe they shouldn’t.
If you’re not willing to make that reduction, I’ll have to think about things further.
I think it is obvious that this involves someone doing something they think they shouldn’t. Which is not uncommon.
Which requires either a or b.
(yay, I finally caused a confusion that should be really easy to clear up!)
Alice and Bob agree that “Earth” means “that giant thing under us”. Alice and Bob disagree about the Earth, though. They disagree about that giant thing under them. Alice thinks it’s round, and Bob thinks it’s flat.
Yes, this is the distinction I had in mind.
So do you now think that I do not equivocate?
No, I think there is still equivocation in the claim that your dialogue and Luke’s contradict one another. Luke is talking about the meaning of the word “Earth” and you are talking about the giant thing under us.
I also do not completely buy the assertion that “ought” is special because it has a reverse definition. This assertion itself sounds to me like a top-down definition of the ordinary type, if an unusually complex one.
Well, there are two possible definitions: Luke’s, and my reverse definition (or top-down definition of the ordinary type).
If you accept both definitions, then you have just proved that the right thing to do is XYZ. One shouldn’t be able to prove this just from definitions. Therefore you cannot accept both definitions.
Let’s try an analogy in another normative arena.
Suppose we propose to define rationality extensionally. Scientists study rationality for many decades and finally come up with a comprehensive definition of rationality that becomes the consensus. And then they start using that definition to shape their own inference patterns. “The rational thing to do is XYZ,” they conclude, using their definitions.
Where’s the problem?
Because “the rational thing” means the thing that produces good results, you need a shouldness claim to scientifically study rationality claims, and science cannot produce shouldness claims.
The hidden assumption is something like “good results are those produced by thinking processes that people think are rational.” If you accept that assumption, or any similar assumption, such a study is valid.
Let’s temporarily separate rational thought from rational action, though the two will ultimately need to be reconciled. I think that we can, and must, characterize rational thought first. We must, because “good results” are good only insofar as they are desired by a rational agent. We can, because while human beings aren’t very good individually at explicitly defining rationality, they are good, collectively via the scientific enterprise, at knowing it when they see it.
In this context, “science cannot produce shouldness claims” is contentious. Best not to make a premise out of it.
But why are rational thoughts good? Because they produce rational actions.
The circular bootstrapping way of doing this may not be untenable. (This is similar to arguments of Eliezer’s.)
In this context, “science cannot produce shouldness claims” is contentious. Best not to make a premise out of it.
You’re making hidden assumptions about what “good” and “rational” mean. Of course, some people accept those assumptions; that’s their prerogative. But those assumptions are not true by definition.
Yes, I am making assumptions (they’re pretty open) about goodness and rationality. Or better: hypotheses. I advance them because I think theories constructed around them will ultimately cohere better with our most confident judgments and discoveries. Try them and see. By all means try alternatives and compare.
Circular bootstrapping is fine. And sure, the main reason rational thoughts are good is that they typically lead to good actions. But they’re still rational thoughts—thus correct—even when they don’t.
I think that if you accept you are making “assumptions” or “hypotheses”, you agree with me, because you are then thinking about the moral issue the way one thinks about scientific issues: as a quest for truth, not as a proof-by-definition.
I have difficulty applying the analogy to “ought”.
I’m not really sure why this was downvoted, compared to everything else I’ve written on the topic.
Did it have to do with the excessive bolding somehow? Do my claims sound especially stupid stated like this?
It seems to completely miss Normal_Anomaly’s point, speaking right past him. As to the “compared to everything else you have written”: I refrained from downvoting your replies to me, even though I would have downvoted them had they been replies to a third party. It is a general policy of mine that I find practical, all else being equal.
Not for objective metaethicists, who seem to be able to escape your circle.
This doesn’t seem to actually be a term, after a few seconds of googling. Could you provide a link to a description of this philosophy?