The ultimate problem is that there’s no way to justify a top-level goal. For a designed mind with a top-level goal, this is not actually a problem, since there’s no way to reason one’s way to a top-level goal change—it can be taken as an ‘is’.
That makes it easier for a designed mind to do rational-ought, but equally harder to do moral-ought.
It might make it easier for a top-level-goal-haver (TLGH) to choose a rational-ought, since there can be no real conflict, but it doesn’t necessarily make it easier to reason about, given such a goal. However, I’d say that it makes it much, much easier to do (what the TLGH sees as) moral-ought, since the TLGH presumably has a concrete top level goal, rather than having to figure it out (or have the illusion of trying to figure it out). The TLGH knows what the morally right thing to do is—it’s hardwired. Figuring out morality is harder when you don’t already have a moral arrow preset for you.
That isn’t to say that we’d agree that a TLGH has the “correct” arrow of morality, but the TLGH can be completely sure that it does, since that’s really what it means to have a top level goal. Any wondering about whether a TLGH did the right thing, by itself, will be rational-ought, not moral-ought.
Now, if you meant that it will be harder for it to act like what we’d consider a moral entity, then I’d say (again, assuming a top level goal) that it will either do so or it won’t, but it won’t find it difficult to force itself to do the right thing. This also assumes such a straightforward goal-seeking design is possible for an intelligence. I don’t have an opinion on that.
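To make “such a straightforward goal-seeking design” a little more concrete, here is a minimal, purely hypothetical sketch (in Python, with invented names) of a TLGH-style agent: the top-level goal is fixed at construction, every choice is scored against it, and nothing in the agent can put the goal itself up for revision.

class TLGHAgent:
    """Toy agent whose top-level goal is fixed at construction and never revised."""

    def __init__(self, top_level_goal):
        # The goal arrives from outside as a brute 'is'; the agent has no
        # machinery for reasoning its way to a different one.
        self._top_level_goal = top_level_goal

    def score(self, action, world_state):
        # Instrumental (rational-ought) reasoning: how well does this action
        # serve the hardwired goal?
        return self._top_level_goal(action, world_state)

    def choose(self, actions, world_state):
        # Moral-ought is trivial from the inside: the right act just is the
        # one that best serves the top-level goal.
        return max(actions, key=lambda a: self.score(a, world_state))

    def reconsider_goal(self, argument):
        # No argument reaches the goal itself; the certainty is unshakable.
        return self._top_level_goal


# Hypothetical usage: a goal that simply prefers numerically larger outcomes.
agent = TLGHAgent(lambda action, state: action + state)
print(agent.choose([1, 2, 3], world_state=10))  # picks 3

Whether anything like this toy design could scale up to a real intelligence is exactly the open question left above.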
The TLGH knows what the morally right thing to do is—it’s hardwired. Figuring out morality is harder when you don’t already have a moral arrow preset for you.
Knowledge requires justification. A TLGH that understands epistemology would see itself as not knowing its TLG, since “it was hardwired into me” is no justification. This applies to humans: we are capable of doubting that our evolutionarily derived moral attitudes are the correct ones.
Evolutionary psychology tells us that our evolutionary history has given us certain moral attitudes and behaviour. So far, so good. Some scientifically minded types take this to constitute a theory of objective morality all in itself. However, that would be subject to the Open Question objection: we can ask of our inherited morality whether it is actually right. (Unrelatedly, we are probably not determined to follow it, since we can overcome strong evolutionary imperatives in, for instance, voluntary celibacy). This is not a merely abstract issue either, since EP has been used to support some contentious claims; for instance, that men should be forgiven for adultery since it is “in their genes” to seek multiple partners.
Any wondering about whether a TLGH did the right thing, by itself, will be rational-ought, not moral-ought.
And if there is any kind of objective truth about which goals are the true top level goals, that is going to have to come from reasoning. Empiricism fails because there are no perceivable moral facts, and ordinary facts fall into the is-ought divide.
Rationality is probably better at removing goals than setting them, better at thou-shalt-nots than thou-shalts. That is in line with the liberal-secular view of morality, where it would be strange and maybe even obnoxious for everyone to be pursuing the same aim.
Knowledge requires justification. A TLGH that understands epistemology would see itself as not knowing its TLG, since “it was hardwired into me” is no justification. This applies to humans: we are capable of doubting that our evolutionarily derived moral attitudes are the correct ones.
This only applies to humans because we are not TLGHs. Beliefs and goals require justification because we might change them. Beliefs and goals which are hardwired do not require justification; they must be taken as given. As far as I’m aware, humans only ever have beliefs or goals that seem hardwired in this sense in the case of damage, like people with Capgras delusion.
However, that would be subject to the Open Question objection: we can ask of our inherited morality whether it is actually right. (Unrelatedly, we are probably not determined to follow it, since we can overcome strong evolutionary imperatives in, for instance, voluntary celibacy).
In fact, I would argue that we can only genuinely ask if our “inherited morality” is right because we are not determined to follow it.
This only applies to humans because we are not TLGHs. Beliefs and goals require justification because we might change them.
I said knowledge requires justification. I was appealing to the standard Justified True Belief theory of knowledge. The fact that belief per se does not need justification is not relevant.
A TLGH that understands epistemology would see itself as not knowing its TLG, since “it was hardwired into me” is no justification.
So, it’s no justification in this technical sense, and it might cheerfully agree that it doesn’t “know” its TLG in this sense, but that’s completely aside from the 100% certainty with which it holds it, a certainty which can be utterly unshakable by reason or argument.
I misunderstood what you were saying due to “justification” being a technical term here. :)