Rational-ought beliefs and actions are the ones optimal for achieving your goals. Goals and optimality can be explained in scientific language. Rational-ought is not moral-ought. Moral-ought is harder to explain because it is about the goals an agent should have, not the ones they happen to have.
I’m sincerely not sure about the rational-ought/moral-ought distinction—I haven’t thought enough about it. But anyway, I think moral-ought is a red herring, here. As far as I can see, the claims made in the post apply to rational-oughts. That was certainly the intention. In other posts on LW about fallacies and the details of rational thinking, it’s commonplace to use quite normative language in connection with rationality. Indeed, a primary goal is to help people to think and behave more rationally, because this is seen as a good for each of us. ‘One ought not to procrastinate’, ‘One ought to compensate for one’s biases’, etc.
Try somehow to shoehorn normative facts into a naturalistic world-view, at the possible peril of the coherence of that world-view.
Easily done with non-moral norms such as rationality.
Would love to see the details… :-)
Not if you think of purpose as a metaphysical fundamental. Easily, if a purpose is just a particular idea in the mind. If I intend to buy a lawnmower, and I write “buy lawnmower” on a piece of paper, there is nothing mysterious about the note, or about the state of mind that preceded it.
I’m not sure I get this. The intention behind drawing the initial distinction between is/ought problems was to make clear the focus is not on, as it were, the mind of the beholder. The question is a less specific variant of the question as to how any mere physical being comes to have intentions (e.g., to buy a lawnmower) in the first place.
That you want to do something does not mean you ought to do it in the categorical, unconditional sense of moral-ought.
I agree, but I think it does mean you ought to in a qualified sense. Your merely being in a physical or computational state, however, by itself doesn’t, or so the thought goes.
I’m sincerely not sure about the rational-ought/moral-ought distinction—I haven’t thought enough about it. But anyway, I think moral-ought is a red herring, here. As far as I can see, the claims made in the post apply to rational-oughts.
Rational-oughts are just shortcuts for actions and patterns of actions which are most efficient for reaching your goals. Moral-oughts are about the goals themselves. Determining which actions are most efficient for reaching your goals is entirely naturalistic, and reduces ultimately to statements about what is. Moral-oughts reduce both to what is, and to what satisfies more important goals. The ultimate problem is that there’s no way to justify a top-level goal. For a designed mind with a top-level goal, this is not actually a problem, since there’s no way to reason one’s way to a top-level goal change—it can be taken as an ‘is’. For entities without top-level goals, however, such as humans, this is a serious problem, since it means that there’s no ultimate justification for any action at all, only interim justifications, the force of which grows weaker rather than stronger as you climb the reflective tower of goals.
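To make the “reflective tower of goals” picture a bit more concrete, here is a minimal toy sketch of my own (not anything from the post): each goal is justified by appeal to a parent goal, a designed mind’s chain terminates in a hardwired top-level goal taken as an ‘is’, and a human-like agent’s chain simply runs out.

```python
# Toy model of the "reflective tower of goals" (illustrative only).
# A goal is justified by appeal to its parent; a top-level goal has no parent.

class Goal:
    def __init__(self, name, parent=None, hardwired=False):
        self.name = name
        self.parent = parent          # the goal this one serves
        self.hardwired = hardwired    # true only for a designed mind's TLG

    def justification(self):
        """Climb the tower of goals looking for a terminus."""
        if self.parent is not None:
            return f"'{self.name}' serves '{self.parent.name}'"
        if self.hardwired:
            return f"'{self.name}' is the hardwired top-level goal (taken as an 'is')"
        return f"'{self.name}' has no further justification"

# Designed mind: the chain bottoms out in a hardwired goal.
tlg = Goal("maximise paperclips", hardwired=True)
sub = Goal("acquire steel", parent=tlg)

# Human-like agent: the chain simply runs out.
career = Goal("get promoted")
task = Goal("finish report", parent=career)

for g in (sub, tlg, task, career):
    print(g.justification())
```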
The ultimate problem is that there’s no way to justify a top-level goal. For a designed mind with a top-level goal, this is not actually a problem, since there’s no way to reason one’s way to a top-level goal change—it can be taken as an ‘is’.
That makes it easier for a designed mind to do rational-ought, but equally harder to do moral-ought.
It might make it easier for a top-level-goal-haver (TLGH) to choose a rational-ought, since there can be no real conflict, but it doesn’t necessarily make it easier to reason about, given such a goal. However, I’d say that it makes it much, much easier to do (what the TLGH sees as) moral-ought, since the TLGH presumably has a concrete top level goal, rather than having to figure it out (or have the illusion of trying to figure it out). The TLGH knows what the morally right thing to do is—it’s hardwired. Figuring out morality is harder when you don’t already have a moral arrow preset for you.
That isn’t to say that we’d agree that a TLGH has the “correct” arrow of morality, but the TLGH can be completely sure that it does, since that’s really what it means to have a top level goal. Any wondering about whether a TLGH did the right thing, by itself, will be rational-ought, not moral-ought.
Now, if you meant that it will be harder for it to act like what we’d consider a moral entity, then I’d say (again, assuming a top level goal) that it will either do so, or it won’t, but it won’t be difficult to force itself to do the right thing. This also assumes such a straightforward goal-seeking design is possible for an intelligence. I don’t have an opinion on that.
The TLGH knows what the morally right thing to do is—it’s hardwired. Figuring out morality is harder when you don’t already have a moral arrow preset for you.
Knowledge requires justification. A TLGH that understands epistemology would see itself as not knowing its TLG, since “it was hardwired into me” is no justification. This applies to humans: we are capable of doubting that our evolutionarily derived moral attitudes are the correct ones.
Evolutionary psychology tells us that our evolutionary history has given us certain moral attitudes and behaviour. So far, so good. Some scientifically minded types take this to constitute a theory of objective morality all in itself. However, that would be subject to the Open Question objection: we can ask of our inherited morality whether it is actually right. (Unrelatedly, we are probably not determined to follow it, since we can overcome strong evolutionary imperatives in, for instance, voluntary celibacy). This is not a merely abstract issue either, since EP has been used to support some contentious claims; for instance, that men should be forgiven for adultery since it is “in their genes” to seek multiple partners.
Any wondering about whether a TLGH did the right thing, by itself, will be rational-ought, not moral-ought.
And if there is any kind of objective truth about which goals are the true top-level goals, that is going to have to come from reasoning. Empiricism fails because there are no perceivable moral facts, and ordinary facts fall into the is-ought divide.
Rationality is probably better at removing goals than setting them, better at thou-shalt-nots than thou-shalts. That is in line with the liberal-secular view of morality, where it would be strange and maybe even obnoxious for everyone to be pursuing the same aim.
Knowledge requires justification. A TLGH that understands epistemology would see itself as not knowing its TLG, since “it was hardwired into me” is no justification. This applies to humans: we are capable of doubting that our evolutionarily derived moral attitudes are the correct ones.
This only applies to humans because we are not TLGHs. Beliefs and goals require justification because we might change them. Beliefs and goals which are hardwired do not require justification; they must be taken as given. As far as I’m aware, humans only ever have beliefs or goals that seem hardwired in this sense in the case of damage, like people with Capgras delusion.
However, that would be subject to the Open Question objection: we can ask of our inherited morality whether it is actually right. (Unrelatedly, we are probably not determined to follow it, since we can overcome strong evolutionary imperatives in, for instance, voluntary celibacy).
In fact, I would argue that we can only genuinely ask if our “inherited morality” is right because we are not determined to follow it.
This only applies to humans because we are not TLGHs. Beliefs and goals require justification because we might change them.
I said knowledge requires justification. I was appealing to the standard Justified True Belief theory of knowledge. The point that belief per se does not need justification is not relevant.
A TLGH that understands epistemology would see itself as not knowing its TLG, since “it was hardwired into me” is no justification.
So, it’s no justification in this technical sense, and it might cheerfully agree that it doesn’t “know” its TLG in this sense, but that’s completely aside from the 100% certainty with which it holds it, a certainty which can be utterly unshakable by reason or argument.
I misunderstood what you were saying due to “justification” being a technical term, here. :)
It’s been sketched out several times already, by various people.
1. You have a set of goals (a posteriori “is”).
2. You have a set of strategies for achieving goals with varying levels of efficiency (a posteriori “is”).
3. Being rational is applying rationality to achieve goals optimally (analytical “is”); i.e. if you want to be rational, you ought to optimise your UF.
Of course that isn’t pure empiricism (what is?), because 3 is a sort of conceptual analysis of “oughtness”. I am not bothered about that for a number of reasons: I am not committed to the insolubility of the is/ought gap, nor to the non-existence of objective ethics.
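To make the sketch concrete, here is a toy example of my own (the goals, weights and efficiency numbers are all made up): given the empirical facts of steps 1 and 2, the rational-ought of step 3 is just read off by maximising.

```python
# Toy version of steps 1-3: goals and strategy efficiencies are empirical "is"
# facts; "you ought to do X" is read off by maximising over them.

goals = {"stay healthy": 1.0, "save money": 0.6}   # step 1: a posteriori goals (weights made up)

# step 2: how efficiently each strategy serves each goal (numbers made up)
strategies = {
    "cycle to work": {"stay healthy": 0.8, "save money": 0.7},
    "drive to work": {"stay healthy": 0.1, "save money": 0.2},
}

def rational_ought(goals, strategies):
    """Step 3: the strategy you 'ought' to adopt is the one that best serves your goals."""
    def value(strategy):
        return sum(goals[g] * eff for g, eff in strategies[strategy].items())
    return max(strategies, key=value)

print(rational_ought(goals, strategies))  # -> 'cycle to work'
```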
I’m not sure I get this. The intention behind drawing the initial distinction between is/ought problems was to make clear the focus is not on, as it were, the mind of the beholder. The question is a less specific variant of the question as to how any mere physical being comes to have intentions (e.g., to buy a lawnmower) in the first place.
I don’t see why the etiology of intentions should pose any more of a problem than the representation of intentions. You can build a robot that seeks out light sources. “Seek light sources” is represented in its programming. It came from the programmer. Where’s the problem?
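A minimal controller along those lines (the sensor and motor functions are hypothetical stand-ins for hardware) makes the point: the “intention” to seek light is nothing over and above a rule sitting in the program, put there by the programmer.

```python
# Minimal light-seeking controller (sensor/motor functions are hypothetical stand-ins).
# The "purpose" of seeking light is just this rule, written by the programmer.

def read_light_sensors():
    """Stand-in for hardware: returns (left, right) brightness readings."""
    return (0.3, 0.7)

def set_motors(left_speed, right_speed):
    """Stand-in for hardware: drives the wheels."""
    print(f"motors: left={left_speed}, right={right_speed}")

def seek_light_step():
    left, right = read_light_sensors()
    # Turn toward the brighter side: the whole "intention" lives in this comparison.
    if left > right:
        set_motors(0.2, 0.8)   # slow the left wheel to turn left
    elif right > left:
        set_motors(0.8, 0.2)   # slow the right wheel to turn right
    else:
        set_motors(0.5, 0.5)   # go straight

seek_light_step()
```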
I agree, but I think it does mean you ought to in a qualified sense.
But the qualified sense is easily explained as goal+strategy. You rational-ought to adopt strategies to achieve your goals.
Your merely being in a physical or computational state, however, by itself doesn’t, or so the thought goes.
Concrete facts about my goals and situation, and abstract facts about which strategies achieve which goals, are all that is needed to establish truths about rational-ought. What is unnaturalistic about that? The abstract facts about how strategies achieve goals may be unnaturalisable in a sense, but it is a rather unimpactive sense. Abstract reasoning in general isn’t (at least usefully) reducible to atoms, but that doesn’t mean it is “about” some non-physical realm. In a sense it isn’t about anything; it just operates on its own level.