Well, there are two possible definitions: Luke's, and my reverse definition (or top-down definition of the ordinary type).
If you accept both definitions, then you have just proved that the right thing to do is XYZ. One shouldn’t be able to prove this just from definitions. Therefore you cannot accept both definitions.
Let’s try an analogy in another normative arena.
Suppose we propose to define rationality extensionally. Scientists study rationality for many decades and finally come up with a comprehensive definition of rationality that becomes the consensus. And then they start using that definition to shape their own inference patterns. “The rational thing to do is XYZ,” they conclude, using their definitions.
Where’s the problem?
Because “the rational thing” means the thing that produces good results. So you need a shouldness claim to scientifically study rationality claims, and science cannot produce shouldness claims.
The hidden assumption is something like “good results are those produced by thinking processes that people think are rational.” If you accept that assumption, or any similar one, such a study is valid.
Let’s temporarily separate rational thought from rational action, though they will ultimately need to be reconciled. I think that we can, and must, characterize rational thought first. We must, because “good results” are good only insofar as they are desired by a rational agent. We can, because while human beings aren’t very good individually at explicitly defining rationality, they are good, collectively via the scientific enterprise, at knowing it when they see it.
In this context, “science cannot produce shouldness claims” is contentious. Best not to make a premise out of it.
But why are rational thoughts good? Because they produce rational actions.
The circular bootstrapping way of doing this may not be untenable. (This is similar to arguments of Eliezer’s.)
You’re making hidden assumptions about what “good” and “rational” mean. Of course, some people accept those assumptions; that’s their prerogative. But those assumptions are not true by definition.
Yes, I am making assumptions (they’re pretty open) about goodness and rationality. Or better: hypotheses. I advance them because I think theories constructed around them will ultimately cohere better with our most confident judgments and discoveries. Try them and see. By all means, try alternatives and compare.
Circular bootstrapping is fine. And sure, the main reason rational thoughts are good is that they typically lead to good actions. But they’re still rational thoughts—thus correct—even when they don’t.
I think that if you accept you are making “assumptions” or “hypotheses,” you agree with me.
Because you are thinking about the moral issue in a way reminiscent of scientific issues: as a quest for truth, not as a proof by definition.