From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific point of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf Dennett’s Intentional Stance.
These things can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent.
For some value of “incoherent”. Personally, I find it useful to strike out the word and replace it with something more precise, such as “semantically meaningless”, “contradictory”, “self-undermining”, etc.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
I take the position that while we may well have evolved with different values, they wouldn’t be morality. “Morality” is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
I take the position that while we may well have evolved with different values, they wouldn’t be morality.
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different to the claim that naturally-evolved behavioural instincts are ipso facto moral.)
I’m not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call “values”. Hence, our values are moral.
But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn’t about values, it’s about life and death and happiness and sadness and many other things beside.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf Dennett’s Intentional Stance.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here. Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we’re talking about the social sciences, that’s another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I’d be open to hearing a different view.
Again, I find it incredible that natural facts have no relation to morality.
I didn’t say this—just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view. The two views are incommensurable, but neither is dispensable -people need reasons.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here.
Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible.
Rational agents should win, in short.
That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to a similar analytical truth: “a moral agent ought to adopt universalisable goals.”
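The sense in which this is analytic can be made concrete with a toy sketch (my own illustration, not from any particular LW post; the actions and utility numbers are made up):

```python
# Toy sketch: a "rational" agent, in this sense, just picks the action
# that maximises its utility function. The utilities are arbitrary,
# which is exactly the point being made above.

def rational_choice(actions, utility):
    """Return the available action with the highest utility."""
    return max(actions, key=utility)

# An arbitrary set of goals, encoded as a utility function.
utility = {"go to class": 3, "play chess": 2, "stay in bed": 1}.get

print(rational_choice(["go to class", "play chess", "stay in bed"], utility))
# -> "go to class"; the "ought" is analytic: picking the max is what
# "rational" means here. Nothing follows about which utilities to have.
```

The hypothetical-imperative flavour is visible in the code: the ought attaches to the maximisation step, not to the contents of the utility function.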
I didn’t say this—just that from a purely scientific point of view, morality is invisible.
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?).
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view.
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
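That inference pattern can be sketched in a few lines, on the stated assumption that the system pursues its goals effectively (a hypothetical toy, not anyone’s actual proposal):

```python
from collections import defaultdict

# Toy sketch (hypothetical): infer a preference ordering from observed
# choices, assuming the chooser acts effectively on its goals.
def revealed_preferences(observations):
    """Count how often each option is chosen over each alternative offered."""
    beats = defaultdict(int)
    for offered, taken in observations:
        for other in offered:
            if other != taken:
                beats[(taken, other)] += 1
    return beats

# Each observation: (options on offer, option actually taken).
obs = [({"apple", "pear"}, "apple"),
       ({"apple", "pear"}, "apple"),
       ({"pear", "banana"}, "pear")]
prefs = revealed_preferences(obs)
print(prefs[("apple", "pear")])  # apple chosen over pear in 2 observations
```

Whether counts like these amount to *naturalistic* facts about goals is, of course, exactly what is in dispute below.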
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to a similar analytical truth: “a moral agent ought to adopt universalisable goals.”
I expressed myself badly. I agree entirely with this.
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
And I want to persuade LWers
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
and
2) that this is ok—these are still respectable facts, notwithstanding.
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour. You seem to be in need of a narrow, stipulative definition of naturalistic.
Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
You introduced the word “basic” there. It might be the case that goals disappear on a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren’t basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferrability of UFs (or goals or intentions) from coarse-grained physical behaviour is a good basis for supposing them to be natural by that usage.
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour.
But this is false, surely. I take it that a fact about X’s UF might be something such as ‘X prefers apples to pears’. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple-gathering behaviour -naturalistically unproblematic- but you also need to impute to X the belief that the things gathered are apples; X might be picking the apples thinking they are pears). There’s any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.
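The underdetermination point can be made vivid with a toy sketch (entirely hypothetical): two rival belief/desire attributions that predict exactly the same observable behaviour.

```python
# Toy sketch (hypothetical): X's picking behaviour under two rival
# belief/desire attributions. Both predict the same observable act.

def act(belief_about_fruit, desired_fruit):
    """X picks the fruit iff X takes it to be the kind X desires."""
    return "pick" if belief_about_fruit == desired_fruit else "leave"

# Attribution A: X believes the fruit are apples and desires apples.
# Attribution B: X believes the (same) fruit are pears and desires pears.
behaviour_A = act(belief_about_fruit="apple", desired_fruit="apple")
behaviour_B = act(belief_about_fruit="pear", desired_fruit="pear")

print(behaviour_A, behaviour_B)
# Both come out "pick": observing the behaviour alone cannot
# discriminate attribution A from attribution B.
```

The belief and desire covary, so any behavioural evidence consistent with A is equally consistent with B; that is the sense in which naturalistic facts fail to constrain the attribution.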
Oh, that’s the philosopher’s definition of naturalistic. OTOH, you could just adopt the scientist’s version and scan their brain.
Well, alright, please tell me: what is a Utility Function, that it can be inferred from a brain scan? How’s this supposed to work, in broad terms?