Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:
I should disclose that I don’t ultimately find any kind of objectivism coherent, including “objective reality”.
-though the apparent tension in being a solipsist who argues gets to the root of the issue.
For what it may be worth:
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values—you can’t fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific point of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions -has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc.. Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there’s a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
I took a slightly different tack, which is maybe moot given your admission to being a solipsist
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I’m sure it will be.
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to (unless it means something like “in accordance with consensus”). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you’ll call it (I mean them all synonymously).
If after engaging in such discourse I am not able to do that, I will eventually want to ask, “So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don’t want?”
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on.
Whose language? What language? If you think all language is a problem, what do you intend to replace it with?
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to
It refers to the stuff that doesn’t go away when you stop believing in it.
“To speak of there being something/nothing out there is meaningless to me unless I can see why to care.”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
Note the bold: “unless I can see why to care.”
Whose language? What language?
English, and all the rest that I know of.
If you think all language is a problem, what do you intend to replace it with?
Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.
It refers to the stuff that doesn’t go away when you stop believing in it.
If so, I suggest “permanent” as a clearer word choice.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific point of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf. Dennett’s Intentional Stance.
Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent.
For some value of “incoherent”. Personally, I find it useful to strike out the word and replace it with something more precise, such as “semantically meaningless”, “contradictory”, “self-undermining”, etc.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
I take the position that while we may well have evolved with different values, they wouldn’t be morality. “Morality” is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
I take the position that while we may well have evolved with different values, they wouldn’t be morality.
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different to the claim that naturally-evolved behavioural instincts are ipso facto moral.)
I’m not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call “values”. Hence, our values are moral.
But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn’t about values, it’s about life and death and happiness and sadness and many other things beside.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf. Dennett’s Intentional Stance.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here. Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we’re talking about the social sciences, that’s another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I’d be open to hearing a different view.
Again, I find it incredible that natural facts have no relation to morality.
I didn’t say this—just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view. The two views are incommensurable, but neither is dispensable -people need reasons.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here.
Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible.
Rational agents should win, in short.
That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to a similar analytical truth: “a moral agent ought to adopt universalisable goals.”
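To make that concrete, here is a minimal sketch in Python of what “maximise its utility function” amounts to; the actions, outcome probabilities, and utility numbers are invented purely for illustration, not anyone’s actual formalism.

```python
# A minimal sketch of the "rational ought": a rational agent picks the action
# with the highest expected utility. All actions, probabilities, and utilities
# below are invented for illustration.

def expected_utility(action, outcome_probs, utility):
    """Sum over outcomes of P(outcome | action) * U(outcome)."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs[action].items())

# Hypothetical world model: P(outcome | action)
outcome_probs = {
    "go_to_class": {"pass_course": 0.9, "fail_course": 0.1},
    "skip_class":  {"pass_course": 0.4, "fail_course": 0.6},
}

# Hypothetical utility function: how much the agent values each outcome
utility = {"pass_course": 10.0, "fail_course": -5.0}

# The analytic "ought": whatever the agent's goals happen to be, rationality
# says "take the action that best serves them".
best_action = max(outcome_probs, key=lambda a: expected_utility(a, outcome_probs, utility))
print(best_action)  # -> "go_to_class"
```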
I didn’t say this—just that from a purely scientific point of view, morality is invisible.
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?).
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view.
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to a similar analytical truth: “a moral agent ought to adopt universalisable goals.”
I expressed myself badly. I agree entirely with this.
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
And I want to persuade LWers
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
and
2) that this is ok—these are still respectable facts, notwithstanding.
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour. You seem to be in need of a narrow, stipulative definition of naturalistic.
Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
You introduced the word “basic” there. It might be the case that goals disappear on a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren’t basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferrability of UFs (or goals or intentions) from coarse-grained physical behaviour is a good basis for supposing them to be natural by that usage.
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour.
But this is false, surely. I take it that a fact about X’s UF might be something such as ‘X prefers apples to pears’. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple-gathering behaviour -naturalistically unproblematic- but you also need to impute to X the belief that the things gathered are apples. X might be picking the apples thinking they are pears). There’s any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.
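A toy sketch in Python may help make the underdetermination point vivid; the observed behaviour and the candidate belief/desire attributions below are invented purely for illustration.

```python
# Toy sketch of the underdetermination point: two different belief/desire
# attributions that rationalise exactly the same observed behaviour.
# Everything here (the behaviour, the candidate attributions) is invented.

observed_behaviour = "picks fruit from this tree"

candidate_attributions = [
    {"belief": "this tree bears apples", "desire": "eat an apple"},
    {"belief": "this tree bears pears",  "desire": "eat a pear"},
]

def predicted_behaviour(attribution):
    """An agent picks from the tree iff it believes the tree bears the fruit it wants."""
    wanted = attribution["desire"].split()[-1]           # "apple" or "pear"
    believed_fruit = attribution["belief"].split()[-1]   # "apples" or "pears"
    return ("picks fruit from this tree"
            if believed_fruit.startswith(wanted) else "walks past")

# Both attributions fit the observed behaviour equally well, so the behavioural
# facts alone do not decide between them.
consistent = [a for a in candidate_attributions
              if predicted_behaviour(a) == observed_behaviour]
print(len(consistent))  # -> 2
```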
Oh, that’s the philosopher’s definition of naturalistic. OTOH, you could just adopt the scientist’s version and scan their brain.
Well, alright, please tell me: what is a Utility Function, that it can be inferred from a brain scan? How’s this supposed to work, in broad terms?