I believe that (almost) everybody thinks “ought” means the same thing, and that people disagree about the concept that “ought” usually refers to.
What is the difference between what “ought” means and what it refers to?
Edit:
This concept is special because it has a reverse definition. Normally a word is defined by the situations in which you can infer that a statement about that word is true. However, “ought” is defined the other way—by what you can do when you infer that a statement about “ought” is true.
In the above, are you saying that “You ought to do X.” is exactly equivalent to the command “Do X!”, and that “I ought to do X.” means “I will do X at the first opportunity and not by accident”?
Is it the case that Katy ought to buy a car? Well, I don’t know. But I know that if Katy is rational, and she becomes convinced that she ought to buy a car, then she will buy a car.
Ought we base the definition of “ought” on a pretty complicated notion of rationality?
To the first one, yes, but they have different connotations.
To the second one, sort of. “I” can get fuzzy here. I have akrasia problems: I should do my work, but I will not do it for a while. If you cut out a sufficiently small portion of my mind, that portion has no opportunity to do my work until it actually does it, because the rest of my mind is preventing it.
Furthermore, I am thinking about these more internally: “should” isn’t part of predicting actions, it’s part of choosing them.
It doesn’t seem complicated to me. Certainly simpler than lukeprog’s definitions.
These issues are ones that should be cleared up by the discussion post I’m going to write in a second.
It seems that my further questions rather ought to wait a second, then.
It isn’t equivalent to a moral “ought”, since one person can command another to do something they both think is immoral.
This would require one of two situations:
a. A person consisting of multiple competing subagents, where the “ought” used by one is not the same as the “ought” used by another.
b. A person with two different systems of morality, one dictating what is moral and the other dictating how much deviation from it they will accept.
In either case you would need two words, because there are two different kinds of “should” in the mind.
I gave the situation of one person commanding another. You replied with a scenario about one person with different internal systems. I don’t know why you did that.
It’s generally believed that you shouldn’t tell people to do things they shouldn’t do.
So your problem reduces to the problem of someone who does things that they believe they shouldn’t.
If you’re not willing to make that reduction, I’ll have to think about things further.
I think it is obvious that this involves someone doing something they think they shouldn’t. Which is not uncommon.
Which requires either a or b.
(yay, I finally caused a confusion that should be really easy to clear up!)
Alice and Bob agree that “Earth” means “that giant thing under us”. Alice and Bob disagree about the Earth, though. They disagree about that giant thing under them. Alice thinks it’s round, and Bob thinks it’s flat.
Yes, this is the distinction I had in mind.
So do you now think that I do not equivocate?
No, I think there is still equivocation in the claim that your dialog and Luke’s contradict one another. Luke is talking about the meaning of the word “Earth” and you are talking about the giant thing under us.
I also do not completely buy the assertion that “ought” is special because it has a reverse definition. This assertion itself sounds to me like a top-down definition of the ordinary type, if an unusually complex one.
Well there are two possible definitions, Luke’s and my reverse definition (or top-down definition of the ordinary type).
If you accept both definitions, then you have just proved that the right thing to do is XYZ. One shouldn’t be able to prove this just from definitions. Therefore you cannot accept both definitions.
Let’s try an analogy in another normative arena.
Suppose we propose to define rationality extensionally. Scientists study rationality for many decades and finally come up with a comprehensive definition of rationality that becomes the consensus. And then they start using that definition to shape their own inference patterns. “The rational thing to do is XYZ,” they conclude, using their definitions.
Where’s the problem?
Because “the rational thing” means the thing that produces good results, you need a shouldness claim to scientifically study rationality claims, and science cannot produce shouldness claims.
The hidden assumption is something like “Good results are those produced by thinking processes that people think are rational” or something along those lines. If you accept that assumption, or any similar assumption, such a study is valid.
Let’s temporarily separate rational thought from rational action, though they will ultimately need to be reconciled. I think that we can, and must, characterize rational thought first. We must, because “good results” are good only insofar as they are desired by a rational agent. We can, because while human beings aren’t very good individually at explicitly defining rationality, they are good, collectively via the scientific enterprise, at knowing it when they see it.
In this context, “science cannot produce shouldness claims” is contentious. Best not to make a premise out of it.
But why are rational thoughts good? Because they produce rational actions.
The circular bootstrapping way of doing this may not be untenable. (This is similar to arguments of Eliezer’s.)
You’re making hidden assumptions about what “good” and “rational” mean. Of course, some people accept those assumptions, that’s their prerogative. But those assumptions are not true by definition.
Yes I am making assumptions (they’re pretty open) about goodness and rationality. Or better: hypotheses. I advance them because I think theories constructed around them will ultimately cohere better with our most confident judgments and discoveries. Try them and see. By all means try alternatives and compare.
Circular bootstrapping is fine. And sure, the main reason rational thoughts are good is that they typically lead to good actions. But they’re still rational thoughts—thus correct—even when they don’t.
I think that if you accept you are making “assumptions” or “hypotheses”, you agree with me, because you are then thinking about the moral issue in a way reminiscent of scientific issues: as a quest for truth, not as a proof-by-definition.
I have difficulty applying the analogy to “ought”.