I think Peterdjones’s answer hits the nail on the head. I understand you’ve thrashed out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.
Really I meant to be throwing the ball back to lukeprog to give us an idea of what the ‘arguing about facts and anticipations’ alternative is, if not just philosophy pretending not to be. I could have been clearer about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post on anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
As far as objective value, I simply don’t understand what anyone means by the term. And I think lukeprog’s point could be summed up as, “Trying to figure out how each discussant is defining their terms is not really ‘doing philosophy’; it’s just the groundwork necessary for people not to talk past each other.”
As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can’t figure out what anticipations X entails, I will just respond, “So what?”
To unite the two themes: The ultimate definition would tell me why to care.
The ultimate definition would tell me why to care.
In the space of all possible meta-ethics, some meta-ethics are cooperative, and others are not. This means that if you can choose which meta-ethics to spread to society, you stand a better chance of achieving your own goals if you spread cooperative meta-ethics. And cooperative meta-ethics is what we call “morality”, by and large.
It’s “Do unto others...”, but abstracted a bit, so that we really mean: “When deciding what to do unto others, use the reasoning that you would rather they used when deciding what to do unto you.”
Omega puts you in a room with a big red button.
“Press this button and you get ten dollars, but another person will be poisoned and slowly die. If you don’t press it, I punch you on the nose and you get no money.
They have a similar button which they can use to kill you and get ten dollars. You can’t communicate with them. In fact, they think they’re the only person being given the option of a button, so this problem isn’t exactly like the Prisoner’s Dilemma. They don’t even know you exist or that their own life is at stake.”
“But here’s the offer I’m making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; of course, if you identify yourself in your decision theory, they’ll be identifying themselves.
“Careful though: this is a one-time offer, and afterwards I may put both of you to further, different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances.”
Given the above scenario, you’ll end up wanting people to choose protecting a stranger’s life over picking up the ten dollars.
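The scenario can be made concrete with a toy calculation. Since Omega imprints the same decision theory on both agents, choosing a rule means choosing a whole world, not a single move. The utility numbers below are my own illustrative assumptions, not anything given in the scenario:

```python
# Toy model of the Omega button scenario above. Whatever rule you pick
# is imprinted on BOTH agents, so choosing a rule means choosing a world.
# The utility numbers below are illustrative assumptions, not given by Omega.

DEATH = -1000  # assumed disutility of being poisoned
PUNCH = -5     # assumed disutility of a punch on the nose
CASH = 10      # the ten dollars

def world_outcome(rule):
    """Return each agent's payoff when both agents follow the same rule."""
    if rule == "press":
        # Each presses: each pockets $10, and each is poisoned by the other.
        return CASH + DEATH
    # Each refrains: each is punched and gets no money, but both survive.
    return PUNCH

for rule in ("press", "refrain"):
    print(rule, world_outcome(rule))
```

On these assumed numbers the refraining rule wins for both agents, which is the point of the scenario: the selfishly best choice, once it binds both parties, is the cooperative one.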
Not quite my point. I’m not talking about what your preferences would be. That would be subjective, personal. I’m talking about what everyone’s meta-ethical preferences would be, if self-consistent, and abstracted enough.
My argument is essentially that objective morality can be considered the position in meta-ethical space which, if occupied by all agents, would lead to the maximization of utility.
That makes it objectively different from other points in meta-ethical space (because it refers to all the agents, not some of them, or one of them), and so it can be considered to lead to an objectively better morality.
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you’re saying in what sense it’s supposed to be objective.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively. And who’s going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that’s a debate for another day.
I’m getting a bad vibe here, and no longer feel we’re having the same conversation.
“Person or group that decides”? Who said anything about anyone deciding anything? And my point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody “decides”, or everyone does. And if they don’t reach the same decision, then there’s no single objective morality—but even if so, perhaps there’s a limited set of coherent meta-ethical positions, like two or three of them.
I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas.
I think my post was inspired more by TDT solutions to the Prisoner’s Dilemma and Newcomb’s problem, a decision theory that takes into account the copies/simulations of its own self, or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies.
I imagined systems that are not wholly copied, but rather just the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, in the knowledge that other such systems would similarly modify themselves.
You’re right, I think I’m confused about what you were talking about, or I inferred too much. I’m not really following at this point either.
One thing, though, is that you’re using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God’s will, as a way to get along with others, etc. That’ll tend to cause some confusion. A good heuristic is, “Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it).”
One thing, though, is that you’re using meta-ethics to mean ethics.
I’m not.
An ethic may say:
I should support same-sex marriage. (SSM-YES) or perhaps:
I should oppose same-sex marriage. (SSM-NO)
The reason for this position is the meta-ethic: e.g.
Because I should act to increase average utility. (UTIL-AVERAGE)
Because I should act to increase total utility. (UTIL-TOTAL)
Because I should act to increase total amount of freedom (FREEDOM-GOOD)
Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
Because I should obey the will of our voters (DEMOCRACY-GOOD)
Because I should do what God commands. (OBEY-GOD).
But some metaethical positions are invalid because of false assumptions (e.g. God’s existence). Other positions may not be abstract enough that they could possibly become universal or apply to all situations. Some combinations of ethics and metaethics may be the result of other factual or reasoning mistakes (e.g. someone thinks SSM will harm society, but it ends up helping it, even by the person’s own measuring).
So, NO, I don’t speak necessarily about Collective Greatest Happiness Utilitarianism. I’m NOT talking about a specific metaethic, not even necessarily a consequentialist metaethic (let alone a “Greatest Happiness Utilitarianism”). I’m speaking about the hypothetical point in metaethical space that everyone would hypothetically prefer everyone to have—an Attractor of metaethical positions.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively.
That’s very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it’s also been argued that introspection (if that is what you mean by “subjectively”) is not a reliable guide to motivation.
This is the whole demonstrated preference thing. I don’t buy it myself, but that’s a debate for another time. What I mean by subjectively is that I will value one person’s life more than another person’s life, or I could think that I want that $1,000,000 more than a rich person wants it, but that’s just all in my head. Comparing utility functions and working from demonstrated preference is usually—not always—a precursor to some kind of authoritarian scheme. I can’t say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from co-operation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf. the Prudent Predator objection to Rand’s egoistic ethics).
OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well,
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise, and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics—which overlaps with but doesn’t equal altruism, just as it overlaps with but doesn’t equal selfishness.
The difference between morality and immorality, is that morality can at its most abstract possible level be cooperative, and immorality can’t.
This by itself isn’t a reason that can force someone to care—you can’t make a rock care about anything, but that’s not a problem with your argument. But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don’t approach it, I expect more war and other devastation.
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise;
Although it usually doesn’t.
and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics—which overlaps with but doesn’t equal altruism, just as it overlaps with but doesn’t equal selfishness.
I think that your version of altruism is a straw man, and that what most people mean by altruism isn’t very different from co-operation.
The difference between morality and immorality, is that morality can at its most abstract possible level be cooperative, and immorality can’t.
Or, as I call it, universalisability.
But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
That argument doesn’t have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences—it can be a self-fulfilling prophecy and not merely passive anticipation.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don’t approach it, I expect more war and other devastation.
There is an argument from self interest, but it is tertiary to the two arguments I mentioned above.
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:
I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality”.
-though the apparent tension in being a solipsist who argues gets to the root of the issue.
For what it may be worth:
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values—you can’t fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (i.e., from a scientific point of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (e.g., Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions: has to figure out whether to vote left or right, be vegetarian or carnivore, spend time writing blog responses or mow the lawn, etc. Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective (i.e., independent of you). There has to be a gap between value and actual behaviour for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there’s a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
I took a slightly different tack, which is maybe moot given your admission to being a solipsist
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I’m sure it will be.
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to (unless it means something like “in accordance with consensus”). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you’ll call it (I mean them all synonymously).
If after engaging in such discourse I am not able to do that, I will eventually want to ask, “So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don’t want?”
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which carry terminal disutility.
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on.
Whose language? What language? If you think all language is a problem, what do you intend to replace it with?
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to
It refers to the stuff that doesn’t go away when you stop believing in it.
“To speak of there being something/nothing out there is meaningless to me unless I can see why to care.”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which carry terminal disutility.
Note the bold.
Whose language? What language?
English, and all the rest that I know of.
If you think all language is a problem, what do you intend to replace it with?
Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.
It refers to the stuff that doesn’t go away when you stop believing in it.
If so, I suggest “permanent” as a clearer word choice.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (i.e., from a scientific point of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, full stop.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf. Dennett’s Intentional Stance.
Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent.
For some value of “incoherent”. Personally, I find it useful to strike out the word and replace it with something more precise, such as “semantically meaningless”, “contradictory”, “self-undermining”, etc.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
I take the position that while we may well have evolved with different values, they wouldn’t be morality. “Morality” is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
I take the position that while we may well have evolved with different values, they wouldn’t be morality.
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different from the claim that naturally-evolved behavioural instincts are ipso facto moral.)
I’m not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call “values”. Hence, our values are moral.
But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn’t about values, it’s about life and death and happiness and sadness and many other things beside.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept that as higher-level descriptions, cf Dennet’s Intentional Stance.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here. Since you can’t make sense of a person as rational if there’s nothing she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we’re talking about the social sciences, that’s another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I’d be open to hearing a different view.
Again, I find it incredible that natural facts have no relation to morality.
I didn’t say this—just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, e.g., ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons (not to mention linguistic meaning and any intentional states) you need a subjective, i.e., non-scientific, point of view. The two views are incommensurable, but neither is dispensable: people need reasons.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here.
Since you can’t make sense of a person as rational if there’s nothing she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible.
Rational agents should win, in short.
That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press the point that unpacking morality leads to the similar analytical truth: “a moral agent ought to adopt universalisable goals.”
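The “rational ought” being unpacked here is just expected-utility maximisation. A minimal sketch, with actions, probabilities, and payoffs invented purely for illustration:

```python
# Minimal sketch of "a rational agent ought to maximise its utility
# function": compute each action's expected utility and take the argmax.
# The actions, probabilities, and payoffs are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "study": [(0.9, 10), (0.1, -2)],          # assumed payoffs
    "procrastinate": [(0.5, 3), (0.5, -8)],   # assumed payoffs
}

# The "rational ought": pick whichever action maximises expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)
```

Nothing here says what the goals should be; the “ought” falls out of the rule itself, which is the analytical point being made.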
I didn’t say this—just that from a purely scientific point of view, morality is invisible.
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, e.g., ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?).
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
To get reasons (not to mention linguistic meaning and any intentional states) you need a subjective, i.e., non-scientific, point of view.
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
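The inference gestured at here, goals read off from actions, can be sketched in miniature as a crude revealed-preference count. The choice data are invented for illustration, and this deliberately sidesteps the belief-attribution problem raised elsewhere in the thread:

```python
# Crude sketch of inferring goals from actions: rank options by how often
# they are chosen over an alternative. The choice data are invented, and
# this ignores the harder problem of attributing beliefs to the chooser.

from collections import Counter

def infer_preference(observed_choices):
    """Given (chosen, rejected) pairs, rank options by times chosen."""
    wins = Counter()
    for chosen, rejected in observed_choices:
        wins[chosen] += 1
        wins[rejected] += 0  # make sure the rejected option is ranked too
    return [option for option, _ in wins.most_common()]

choices = [("apple", "pear"), ("apple", "banana"), ("pear", "banana")]
print(infer_preference(choices))
```

Whether such a ranking counts as a naturalistic fact about the agent is exactly what the rest of the exchange disputes.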
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press the point that unpacking morality leads to the similar analytical truth: “a moral agent ought to adopt universalisable goals.”
I expressed myself badly. I agree entirely with this.
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
And I want to persuade LWers
1) that facts about her utility functions aren’t naturalistic facts, in the way facts about her cholesterol level or about neural activity in different parts of her cortex are,
and
2) that this is ok—these are still respectable facts, notwithstanding.
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
1) that facts about her utility functions aren’t naturalistic facts, in the way facts about her cholesterol level or about neural activity in different parts of her cortex are,
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour. You seem to be in need of a narrow, stipulative definition of “naturalistic”.
Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
You introduced the word “basic” there. It might be the case that goals disappear on a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren’t basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferrability of UFs (or goals or intentions) from coarse-grained physical behaviour is a good basis for supposing them to be natural by that usage.
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour.
But this is false, surely. I take it that a fact about X’s UF might be some such as ‘X prefers apples to pears’. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple-gathering behaviour, which is naturalistically unproblematic, but you also need to impute to X the belief that the things gathered are apples; X might be picking the apples thinking they are pears). There’s any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, your have coulds and shoulds and couldn;t and shouldn’ts. I have been trying to press that unpacking morality leads to the similar analytical truth: ” a moral agent ought to adopt universalisable goals.”
I expressed myself badly. I agree entirely with this.
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
And I want to persuade LWers
*that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
and
*that this is ok—these are still respectable facts, notwithstanding.
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:
I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality”.
-though the apparent tension in being a solipsist who argues gets to the root of the issue.
For what it may be worth:
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values—you can’t fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions—has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc. Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there’s a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
What they generally mean is “not subjective”. You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since one has to understand the meanings of individual terms to see a contradiction.
As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
I’m not saying non-subjective value is contradictory, just that I don’t know what it could mean. To me “value” is a verb, and the noun form is just a nominalization of the verb, like the noun “taste” is a nominalization of the verb “taste.” Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn’t understand what she meant either.
As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
But before I would even want to revise my aims and goals, I’d have to anticipate something different than I do now. What does “some of your beliefs may be wrong by objective standards” make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the “wrong” moral sentiments?)
“value” is a verb, and the noun form is just a nominalization of the verb,
I don’t see the force of that argument. “Believe” is a verb and “belief” is a nominalisation. But beliefs can be objectively right or wrong—if they belong to the appropriate subject area.
Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc.
It is possible for aesthetics(and various other things) to be un-objectifiable whilst morality (and various other things) are objectifiable.
But before I would even want to revise my aims and goals, I’d have to anticipate something different than I do now.
Why?
What does “some of your beliefs may be wrong by objective standards” make me anticipate that would motivate me to change my goals?
You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It’s not an ultimate. But morality is an ultimate because there is no more important value than a moral value.
(This is the same as the question in the other comment: What penalty do I suffer by having the “wrong” moral sentiments?)
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral. You should be moral by the definition of “moral” and “should”. It’s an analytical truth.
It is for selfishness to justify itself in the face of morality, not vice versa.
First of all, I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality.” It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, anyone else—if they understand what I’m getting at here).
You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It’s not an ultimate
So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don’t pay rent but are still important? I just don’t see how that makes sense.
You should be moral by the definition of “moral” and “should”.
This seems circular.
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral.
First of all, I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality.” It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably
How do you know that?
So you disagree with EY about making beliefs pay rent?
If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.
You should be moral by the definition of “moral” and “should”.
This seems circular.
You say that like that’s a bad thing. I said it was analytical and analytical truths would be expected to sound tautologous or circular.
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral.
It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably
How do you know that?
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.
Well, what use is your belief in “objective value”?
So it’s still true. Not caring is not refutation.
Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like “true” and “refute.” I would substitute “useful” and “show people why it is not useful,” respectively.
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
Well, what use is your belief in “objective value”?
What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn’t: Morality is, by definition.
Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like “true” and “refute.”
Then I have a bridge to sell you.
I would substitute “useful” and “show people why it is not useful,” respectively.
And would it be true that it is non-useful? Since to assert P is to assert “P is true”, truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
That’s what I was responding to.
What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn’t: Morality is, by definition.
Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.
And would it be true that it is non-useful? Since to assert P is to assert “P is true”, truth is a rather hard thing to eliminate.
You just eliminated it: If to assert P is to assert “P is true,” then to assert “P is true” is to assert P. We could go back and forth like this for hours.
But you still haven’t defined objective value.
Dictionary says, “Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.”
How can a value be objective? ---EDIT: Especially since a value is a personal feeling. If you are defining “value” differently, how?
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
That’s what I was responding to.
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.
Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.
I think moral values are ultimate because I can’t think of a valid argument of the form “I should do X because Y”. Please give an example of a pan-galactic value that can be substituted for Y.
You just eliminated it: If to assert P is to assert “P is true,” then to assert “P is true” is to assert P. We could go back and forth like this for hours.
Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to “no, that’s not true”.
Dictionary says [objective]: “Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.”
How can a value be objective?
By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.
Especially since a value is a personal feeling.
You haven’t remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking.
If you are defining “value” differently, how?
“Values can be defined as broad preferences concerning appropriate courses of action or outcomes”
If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to “no, that’s not true”.
I’ll just decide not to follow the advice, or I’ll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don’t need to use the word “true” or any equivalent to do that. I can just say it didn’t work.
People have been known to follow really bad advice, sometimes to their detriment and suffering a lot of pain along the way.
Some people have followed excessively stringent diets to the point of malnutrition or death. (This isn’t intended as a swipe at CR—people have been known to go a lot farther than that.)
People have attempted (for years or decades) to shut down their sexual feelings because they think their God wants it.
I’ll just decide not to follow the advice, or I’ll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don’t need to use the word “true” or any equivalent to do that. I can just say it didn’t work.
Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent—showing that you have dispensed with the concept—is harder. Why didn’t it work? You’re going to have to paraphrase “Because it wasn’t true” or refuse to answer.
The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It’s impossible to show you’ve dispensed with any concept, except to show that it isn’t useful for what you’re doing. That is what I’ve done. I’m non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.
The concept of truth is for utility, not utility for truth.
Truth is useful for whatever you want to do with it. If people can collect stamps for the sake of collecting stamps, they can collect truths for the sake of collecting truths.
I’m non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion
Sounding like religion would not render something incomprehensible...but it could easily provoke an “I don’t like it” reaction, which is then dignified with the label “incoherent” or whatever.
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences.
I agree, if you mean things like, “If I now believe that she is really a he, I don’t want to take ‘her’ home anymore.”
I think moral values are ultimate because I can’t think of a valid argument of the form “I should do X because Y”.
Neither can I. I just don’t draw the same conclusion. There’s a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I’m not sure why you would think it is veiled disagreement, seeing as lukeprog’s whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of “incoherent to me” or someone else, so it’s not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English’s fault, and I don’t really care which it is, but it would be preferable for something to actually make it across the inferential gap.)
EDIT: Oops, posted too soon.
“Values can be defined as broad preferences concerning appropriate courses of action or outcomes”
So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn’t carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn’t matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.
At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun//deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.
But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn’t make my want for deliciousness “uninfluenced by personal feelings.”
In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn’t lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus “objective values.” But not for 1, because personal feelings are, well, personal.
If so, then it seems I am back to my initial response to lukeprog and ensuing brief discussion. In short, if it is only the belief in objective facts that is wrong, then I wouldn’t want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values 1, but preferences/values 2.
There’s a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I’m not sure why you would think it is veiled disagreement, seeing as lukeprog’s whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of “incoherent to me” or someone else,
“Incoherence” means several things. Some of them, such as self-contradiction, are as objective as anything. You seem to find morality meaningless in some personal sense. Looking at dictionaries doesn’t seem to work for you. Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves. So that’s why I see complaints of incoherence as being disguised disagreement.
At bottom, I act to get enjoyment and/or avoid pain, that is, to win.
If you say so. That doesn’t make morality false, meaningless or subjective. It makes you an amoral hedonist.
But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn’t make my want for deliciousness “uninfluenced by personal feelings.”
Perhaps not completely, but that still leaves some things as relatively more objective than others.
In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn’t lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus “objective values.” But not for 1, because personal feelings are, well, personal.
Then your categories aren’t exhaustive, because preferences can also be defined to include universalisable values alongside personal whims. You may be making the classic error of taking “subjective” to mean “believed by a subject”.
Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves
The problem isn’t that I don’t know what it means. The problem is that it means many different things and I don’t know which of those you mean by it.
an amoral hedonist
I have moral sentiments (empathy, sense of justice, indignation, etc.), so I’m not amoral. And I am not particularly high time-preference, so I’m not a hedonist.
preferences can also be defined to include universalisable values alongside personal whims
If you mean preferences that everyone else shares, sure, but there’s no stipulation in my definitions that other people can’t share the preferences. In fact, I said, “(though they may be universal or semi-universal).”
You may be making the classic error of taking “subjective” to mean “believed by a subject”
It’d be a “classic error” to assume you meant one definition of subjective rather than another, when you haven’t supplied one yourself? This is about the eighth time in this discussion that I’ve thought that I can’t imagine what you think language even is.
I doubt we have any disagreement, to be honest. I think we only view language very, radically differently. (You could say we have a disagreement about language.)
Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves
The problem isn’t that I don’t know what it means.
What “moral” means or what “good” means?
The problem is that it means many different things and I don’t know which of those you mean by it.
No, that isn’t the problem. It has one basic meaning, but there are a lot of different theories about it. Elsewhere you say that utilitarianism renders objective morality meaningful. A theory of X cannot render X meaningful, but it can render X plausible.
I have moral sentiments (empathy, sense of justice, indignation, etc.), so I’m not amoral. And I am not particularly high time-preference, so I’m not a hedonist.
But you theorise that you act on them only to increase your pleasure (and that nobody ever acts otherwise).
If you mean preferences that everyone else shares, sure, but there’s no stipulation in my definitions that other people can’t share the preferences.
I don’t see the point in stipulating that preferences can’t be shared. People who
believe they can be shared just have to find another word. Nothing is proven.
You may be making the classic error of taking “subjective” to mean “believed by a subject”
It’d be a “classic error” to assume you meant one definition of subjective rather than another, when you haven’t supplied one yourself?
I’ve quoted the dictionary definition, and that’s what I mean:
“1. existing in the mind; belonging to the thinking subject rather than to the object of thought (opposed to objective).
2. pertaining to or characteristic of an individual; personal; individual: a subjective evaluation.
3. placing excessive emphasis on one’s own moods, attitudes, opinions, etc.; unduly egocentric”
This is about the eighth time in this discussion that I’ve thought that I can’t imagine what you think language even is.
I think language is public, I think (genuine) disagreements about meaning can
be resolved with dictionaries, and I think you shouldn’t assume someone is using
idiosyncratic definitions unless they give you good reason.
As far as objective value, I simply don’t understand what anyone means by the term.
Objective truth is what you should believe even if you don’t. Objective values
are the values you should have even if you have different values.
And I think lukeprog’s point could be summed up as, “Trying to figure out how each discussant is defining their terms is not really ‘doing philosophy’; it’s just the groundwork necessary for people not to talk past each other.”
Where the groundwork is about 90% of the job...
As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can’t figure out what anticipations X entails, I will just respond, “So what?”
That has been answered several times. You are assuming that instrumental value
is ultimate value, and it isn’t.
To unite the two themes: The ultimate definition would tell me why to care.
Imagine you are arguing with someone who doesn’t “get” rationality. If they
believe in instrumental values, you can persuade them they should care about
rationality because it will enable them to achieve their aims. If they don’t, you can’t.
Even good arguments will fail to work on some people.
You should care about morality because it is morality. Morality defines (the ultimate kind of) “should”.
“What I should do” =def “what is moral”.
Not everyone does get that, which is why those who “don’t care” are “made to care” by various sanctions.
As far as objective value, I simply don’t understand what anyone means by the term.
Objective truth is what you should believe even if you don’t.
“Should” for what purpose?
Where the groundwork is about 90% of the job...
I certainly agree there. The question is whether it is more useful to assign the label “philosophy” to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called “philosophy,” only groundwork, meaning we would all be in agreement and there would be nothing to argue except definitions.
Imagine you are arguing with someone who doesn’t “get” rationality. If they believe in instrumental values, you can persuade them they should care about rationality because it will enable them to achieve their aims. If they don’t, you can’t.
I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you’re saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I’m wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it’s all semantic confusion, and because I don’t want to sound dismissive or obstinate in continuing to say, “So what?”
Objective truth is what you should believe even if you don’t.
“Should” for what purpose?
Believing in truth is what rational people do.
Imagine you are arguing with someone who doesn’t “get” rationality. If they believe in instrumental values, you can persuade them they should care about rationality because it will enable them to achieve their aims. If they don’t, you can’t.
I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims.
Which is good because...?
It seems you’re saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I’m wrong).
Correct.
This is what makes me curious about why you think I would care.
I can argue that your personal aims are not the ultimate value, and I can
suppose you might care about that just because it is true. That is how
arguments work: one rational agent tries to persuade another that something
is true. If one of the participants doesn’t care about truth at all, the process
probably isn’t going to work.
The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it’s all semantic confusion, and because I don’t want to sound dismissive or obstinate in continuing to say, “So what?”
I think that horse has bolted. Inasmuch as you don’t care about truth per se, you have advertised yourself as being irrational.
Winning is what rational people do. We can go back and forth like this.
Which is good because...?
It benefits me, because I enjoy helping people. See, I can say, “So what?” in response to “You’re wrong.” Then you say, “You’re still wrong.” And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I can see how it could possibly, potentially affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for “being wrong.”
I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn’t care about truth at all, the process probably isn’t going to work.
Sure, people usually argue whether something is “true or false” because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is unusual for someone to say they don’t care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed—very unusually—is claimed not to have any effect on such things, “true” and “false” become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can’t, I will happily discard them.
Sure, people usually argue whether something is “true or false” because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc.
So you say. I can think of two arguments against that: people acquire true beliefs that
aren’t immediately useful, and untrue beliefs can be pleasing.
I never said they had to be “immediately useful” (hardly anything ever is). Untrue beliefs might be pleasing, but when people are arguing truth and falsehood it is not in order to prove that the beliefs they hold are untrue so that they can enjoy believing them, so it’s not an objection either.
A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I’m just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don’t/wouldn’t care about “truth” in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.
For example, the thinking in the post anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism.
Hmm. It sounds to me like a kind of methodological twist on logical positivism... just don’t bother with things that don’t have empirical consequences.
I think Peterdjones’s answer hits it on the head. I understand you’ve thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.
Really I meant to be throwing the ball back to lukeprog to give us an idea of what the ‘arguing about facts and anticipations’ alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in the world-views. That means doing some philosophy.
As far as objective value, I simply don’t understand what anyone means by the term. And I think lukeprog’s point could be summed up as, “Trying to figure out how each discussant is defining their terms is not really ‘doing philosophy’; it’s just the groundwork necessary for people not to talk past each other.”
As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can’t figure out what anticipations X entails, I will just respond, “So what?”
To unite the two themes: The ultimate definition would tell me why to care.
In the space of all possible meta-ethics, some meta-ethics are cooperative, and others are not. This means that if you can choose which meta-ethics to spread to society, you stand a better chance of achieving your own goals if you spread cooperative meta-ethics. And cooperative meta-ethics is what we call “morality”, by and large.
It’s “Do unto others...”, but abstracted a bit, so that we really mean: “When deciding what to do unto others, use the reasoning that you would rather they used when deciding what to do unto you.”
Omega puts you in a room with a big red button. “Press this button and you get ten dollars but another person will be poisoned to slowly die. If you don’t press it I punch you on the nose and you get no money. They have a similar button which they can use to kill you and get 10 dollars. You can’t communicate with them. In fact they think they’re the only person being given the option of a button, so this problem isn’t exactly like Prisoner’s dilemma. They don’t even know you exist or that their own life is at stake.”
“But here’s the offer I’m making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; of course, if you identify yourself in your decision theory, they’ll be identifying themselves.
“Careful though: This is a one time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances.”
Given the above scenario, you’ll end up wanting people to choose protecting the life of strangers more than picking 10 dollars.
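The button scenario above can be sketched as a toy expected-utility computation. The utility numbers (`LIFE`, `CASH`, `PUNCH`) are my own illustrative assumptions, not anything from the thread; the point is only that when one decision rule is imprinted on both agents, the “neither presses” outcome comes out ahead:

```python
# Toy model of Omega's button scenario. All utility numbers are
# illustrative assumptions, not taken from the discussion.

LIFE = 1000   # assumed utility of staying alive
CASH = 10     # the 10 dollars for pressing
PUNCH = -5    # assumed disutility of a punch on the nose

def payoff(i_press, they_press):
    """My utility, given my button choice and the other agent's."""
    u = CASH if i_press else PUNCH
    if not they_press:        # I survive only if they don't press
        u += LIFE
    return u

# Omega imprints the same decision theory on both agents, so the only
# reachable outcomes are "both press" or "neither presses".
both_press = payoff(True, True)     # dead, with 10 dollars
neither = payoff(False, False)      # alive, punched, no money

print(neither > both_press)  # True: cooperation wins under these assumptions
```

Choosing the decision rule for both agents at once is what makes the cooperative rule come out ahead; choosing only one’s own button press, with the other agent’s rule held fixed, would not.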
I would indeed prefer it if other people had certain moral sentiments. I don’t think I ever suggested otherwise.
Not quite my point. I’m not talking about what your preferences would be. That would be subjective, personal. I’m talking about what everyone’s meta-ethical preferences would be, if self-consistent, and abstracted enough.
My argument is essentially that objective morality can be considered the position in meta-ethical-space which if occupied by all agents would lead to the maximization of utility.
That makes it objectively (because it refers to all the agents, not some of them, or one of them) different from other points in meta-ethical-space, and so it can be considered to lead to an objectively better morality.
Then why not just call it “universal morality”?
It’s called that too. Are you just objecting as to what we are calling it?
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you’re saying in what sense it’s supposed to be objective.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively. And who’s going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that’s a debate for another day.
I’m getting a bad vibe here, and no longer feel we’re having the same conversation.
“Person or group that decides”? Who said anything about anyone deciding anything? And my point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody “decides”, or everyone does. And if they don’t reach the same decision, then there’s no single objective morality—but even so, perhaps there’s a limited set of coherent metaethical positions, like two or three of them.
I think my post was inspired more by TDT solutions to the Prisoner’s dilemma and Newcomb’s problem, a decision theory that takes into account the copies/simulations of its own self, or other problems that involve humans getting copied and needing to make decisions in blind coordination with their copies.
I imagined systems that are not wholly copied, but rather just the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, in the knowledge that other such systems would similarly modify themselves.
You’re right, I think I’m confused about what you were talking about, or I inferred too much. I’m not really following at this point either.
One thing, though, is that you’re using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God’s will, as a way to get along with others, etc. That’ll tend to cause some confusion. A good heuristic is, “Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it).”
I’m not.
An ethic may say:
I should support same-sex marriage. (SSM-YES)
or perhaps:
I should oppose same-sex marriage. (SSM-NO)
The reason for this position is the meta-ethic:
e.g.
Because I should act to increase average utility. (UTIL-AVERAGE)
Because I should act to increase total utility. (UTIL-TOTAL)
Because I should act to increase the total amount of freedom. (FREEDOM-GOOD)
Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
Because I should obey the will of our voters (DEMOCRACY-GOOD)
Because I should do what God commands. (OBEY-GOD).
But some metaethical positions are invalid because of false assumptions (e.g. God’s existence). Other positions may not be abstract enough that they could possibly become universal or apply to all situations. Some combinations of ethics and metaethics may be the result of other factual or reasoning mistakes (e.g. someone thinks SSM will harm society, but it ends up helping it, even by the person’s own measuring).
So, NO, I don’t speak necessarily about Collective Greatest Happiness Utilitarianism. I’m NOT talking about a specific metaethic, not even necessarily a consequentialist metaethic (let alone a “Greatest Happiness Utilitarianism”). I’m speaking about the hypothetical point in metaethical space that everyone would hypothetically prefer everyone to have—an Attractor of metaethical positions.
That’s very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it’s also been argued that introspection (if that is what you mean by “subjectively”) is not a reliable guide to motivation.
This is the whole demonstrated preference thing. I don’t buy it myself, but that’s a debate for another time. What I mean by subjectively is that I will value one person’s life more than another person’s life, or I could think that I want that $1,000,000 more than a rich person wants it, but that’s just all in my head. To compare utility functions and work from demonstrated preference usually—not always—is a precursor to some kind of authoritarian scheme. I can’t say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from co-operation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf. the Prudent Predator objection to Rand’s egoistic ethics).
OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise, and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics—which overlaps with but doesn’t equal altruism, just as it overlaps with but doesn’t equal selfishness.
The difference between morality and immorality, is that morality can at its most abstract possible level be cooperative, and immorality can’t.
This by itself isn’t a reason that can force someone to care—you can’t make a rock care about anything, but that’s not a problem with your argument. But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don’t approach it, I expect more war and other devastation.
Although it usually doesn’t.
I think that your version of altruism is a straw man, and that what most people mean by altruism isn’t very different from co-operation.
Or, as I call it, universalisability.
That argument doesn’t have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences—it can be a self-fulfilling prophecy and not merely passive anticipation.
There is an argument from self interest, but it is tertiary to the two arguments I mentioned above.
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:
-though the apparent tension in being a solipsist who argues gets to the root of the issue.
For what it may be worth:
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values—you can’t fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions -has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc.. Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there’s a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I’m sure it will be.
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to (unless it means something like “in accordance with consensus”). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you’ll call it (I mean them all synonymously).
If after engaging in such discourse I am not able to do that, I will eventually want to ask, “So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don’t want?”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
Whose language? What language? If you think all language is a problem, what do you intend to replace it with?
It refers to the stuff that doesn’t go away when you stop believing in it.
Note the bold.
English, and all the rest that I know of.
Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.
If so, I suggest “permanent” as a clearer word choice.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf. Dennett’s Intentional Stance.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
For some value of “incoherent”. Personally, I find it useful to strike out the word and replace it with something more precise, such as “semantically meaningless”, “contradictory”, “self-undermining” etc.
I take the position that while we may well have evolved with different values, they wouldn’t be morality. “Morality” is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different from the claim that naturally-evolved behavioural instincts are ipso facto moral.)
I’m not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call “values”. Hence, our values are moral.
But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn’t about values, it’s about life and death and happiness and sadness and many other things besides.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here. Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we’re talking about the social sciences, that’s another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I’d be open to hear a different view.
I didn’t say this—just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, ‘I ought to go to class’ in given circs.. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view. The two views are incommensurable, but neither is dispensable -people need reasons.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to a similar analytical truth: “a moral agent ought to adopt universalisable goals.”
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
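The claim that goals can be inferred from effective goal-directed action can be illustrated with a deliberately naive sketch (the data and the inference rule are my own assumptions; the objection later in the thread, that belief attribution leaves such inference underdetermined, still applies):

```python
# Naively infer a preference ordering from observed choices.
# The observations below are made-up illustrative data.
from collections import Counter

# Each record: (option A, option B, what the agent actually took)
observed = [
    ("apple", "pear", "apple"),
    ("apple", "pear", "apple"),
    ("pear", "banana", "pear"),
]

times_chosen = Counter(chosen for _, _, chosen in observed)

def prefers(x, y):
    """Infer that x is preferred to y if x was chosen more often."""
    return times_chosen[x] > times_chosen[y]

print(prefers("apple", "pear"))   # True on this data
```

Note what the sketch leaves out: nothing here constrains what the agent believed the fruit to be, which is exactly the gap the belief-attribution objection points at.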
I expressed myself badly. I agree entirely with this.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
And I want to persuade LWers
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
and
2) that this is ok—these are still respectable facts, notwithstanding.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour. You seem to be in need of a narrow, stipulative definition of naturalistic.
You introduced the word “basic” there. It might be the case that goals disappear on a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren’t basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferrabilty of UFs (or goals or intentions) from coarse-grained physical behaviour is a good basis for supposing them to be natural by that usage.
But this is false, surely. I take it that a fact about X’s UF might be some such as ‘X prefers apples to pears’. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple gathering behaviour -naturalistically unproblematic- but you also need to impute to X the belief that the things gathered are apples. X might be picking the apples thinking they are pears). There’s any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.
Oh, that’s the philosopher’s definition of naturalistic. OTOH, you could just adopt the scientist’s version and scan their brain.
Well, alright, please tell me: what is a Utility Function, that it can be inferred from a brain scan? How’s this supposed to work, in broad terms?
I expressed myself badly. I agree entirely with this.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
And I want to persuade LWers
*that facts about her utility functions aren’t naturalistic facts in the way that facts about her cholesterol level, or about neural activity in different parts of her cortex, are,
and
*that this is ok—these are still respectable facts, notwithstanding.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist (though the apparent contradiction in being a solipsist who argues gets to the root of the issue):
For what it may be worth:
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values—you can’t fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions -has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc. Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there’s a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
What they generally mean is “not subjective”. You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since one has to understand the meanings of individual terms to see a contradiction.
As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
I’m not saying non-subjective value is contradictory, just that I don’t know what it could mean. To me “value” is a verb, and the noun form is just a nominalization of the verb, like the noun “taste” is a nominalization of the verb “taste.” Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn’t understand what she meant either.
But before I would even want to revise my aims and goals, I’d have to anticipate something different than I do now. What does “some of your beliefs may be wrong by objective standards” make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the “wrong” moral sentiments?)
I don’t see the force to that argument. “Believe” is a verb and “belief” is a nominalisation. But beliefs can be objectively right or wrong—if they belong to the appropriate subject area.
It is possible for aesthetics(and various other things) to be un-objectifiable whilst morality (and various other things) are objectifiable.
Why?
You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It’s not an ultimate. But morality is an ultimate because there is no more important value than a moral value.
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral. You should be moral by the definition of “moral” and “should”. It’s an analytical truth. It is for selfishness to justify itself in the face of morality, not vice versa.
First of all, I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality.” It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, anyone else—if they understand what I’m getting at here).
So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don’t pay rent but are still important? I just don’t see how that makes sense.
This seems circular.
What if I say, “So what?”
How do you know that?
If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.
You say that like that’s a bad thing. I said it was analytical and analytical truths would be expected to sound tautologous or circular.
So it’s still true. Not caring is not refutation.
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
Well, what use is your belief in “objective value”?
Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like “true” and “refute.” I would substitute “useful” and “show people why it is not useful,” respectively.
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn’t: Morality is, by definition.
Then I have a bridge to sell you.
And would it be true that it is non-useful? Since to assert P is to assert “P is true”, truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.
That’s what I was responding to.
Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.
You just eliminated it: If to assert P is to assert “P is true,” then to assert “P is true” is to assert P. We could go back and forth like this for hours.
But you still haven’t defined objective value.
Dictionary says, “Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.”
How can a value be objective? ---EDIT: Especially since a value is a personal feeling. If you are defining “value” differently, how?
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.
I think moral values are ultimate because I can’t think of a valid argument of the form “I should do X because Y”. Please give an example of a pangalactic value that can be substituted for Y.
Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to “no, that’s not true”.
By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.
You haven’t remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking.
“Values can be defined as broad preferences concerning appropriate courses of action or outcomes”
I missed this:
I’ll just decide not to follow the advice, or I’ll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don’t need to use the word “true” or any equivalent to do that. I can just say it didn’t work.
People have been known to follow really bad advice, sometimes to their detriment and suffering a lot of pain along the way.
Some people have followed excessively stringent diets to the point of malnutrition or death. (This isn’t intended as a swipe at CR—people have been known to go a lot farther than that.)
People have attempted (for years or decades) to shut down their sexual feelings because they think their God wants it.
Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent—showing that you have dispensed with the concept—is harder. Why didn’t it work? You’re going to have to paraphrase “Because it wasn’t true” or refuse to answer.
The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It’s impossible to show you’ve dispensed with any concept, except to show that it isn’t useful for what you’re doing. That is what I’ve done. I’m non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.
Truth is useful for whatever you want to do with it. If people can collect stamps for the sake of collecting stamps, they can collect truths for the sake of collecting truths.
Sounding like religion would not render something incomprehensible...but it could easily provoke an “I don’t like it” reaction, which is then dignified with the label “incoherent” or whatever.
I agree, if you mean things like, “If I now believe that she is really a he, I don’t want to take ‘her’ home anymore.”
Neither can I. I just don’t draw the same conclusion. There’s a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I’m not sure why you would think it is veiled disagreement, seeing as lukeprog’s whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of “incoherent to me” or someone else, so it’s not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English’s fault, and I don’t really care which it is, but it would be preferable for something to actually make it across the inferential gap.)
EDIT: Oops, posted too soon.
So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn’t carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn’t matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.
At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun//deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.
But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn’t make my want for deliciousness “uninfluenced by personal feelings.”
In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn’t lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus “objective values.” But not for 1, because personal feelings are, well, personal.
If so, then it seems I am back to my initial response to lukeprog and ensuing brief discussion. In short, if it is only the belief in objective facts that is wrong, then I wouldn’t want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values 1, but preferences/values 2.
“Incoherence” means several things. Some of them, such as self-contradiction, are as objective as anything. You seem to find morality meaningless in some personal sense. Looking at dictionaries doesn’t seem to work for you. Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves. So that’s why I see complaints of incoherence as being disguised disagreement.
If you say so. That doesn’t make morality false, meaningless or subjective. It makes you an amoral hedonist.
Perhaps not completely, but that still leaves some things as relatively more objective than others.
Then your categories aren’t exhaustive, because preferences can also be defined to include universalisable values alongside personal whims. You may be making the classic error of taking “subjective” to mean “believed by a subject”.
The problem isn’t that I don’t know what it means. The problem is that it means many different things and I don’t know which of those you mean by it.
I have moral sentiments (empathy, sense of justice, indignation, etc.), so I’m not amoral. And I am not particularly high time-preference, so I’m not a hedonist.
If you mean preferences that everyone else shares, sure, but there’s no stipulation in my definitions that other people can’t share the preferences. In fact, I said, “(though they may be universal or semi-universal).”
It’d be a “classic error” to assume you meant one definition of subjective rather than another, when you haven’t supplied one yourself? This is about the eighth time in this discussion that I’ve thought that I can’t imagine what you think language even is.
I doubt we have any disagreement, to be honest. I think we only view language very radically differently. (You could say we have a disagreement about language.)
What “moral” means or what “good” means?
No, that isn’t the problem. It has one basic meaning, but there are a lot of different theories about it. Elsewhere you say that utilitarianism renders objective morality meaningful. A theory of X cannot render X meaningful, but it can render X plausible.
But you theorise that you only act on them (and that nobody ever acts but) to increase your pleasure.
I don’t see the point in stipulating that preferences can’t be shared. People who believe they can be just have to find another word. Nothing is proven.
I’ve quoted the dictionary definition, and that’s what I mean.
“1. existing in the mind; belonging to the thinking subject rather than to the object of thought (opposed to objective). 2. pertaining to or characteristic of an individual; personal; individual: a subjective evaluation. 3. placing excessive emphasis on one’s own moods, attitudes, opinions, etc.; unduly egocentric”
I think language is public, I think (genuine) disagreements about meaning can be resolved with dictionaries, and I think you shouldn’t assume someone is using idiosyncratic definitions unless they give you good reason.
Objective truth is what you should believe even if you don’t. Objective values are the values you should have even if you have different values.
Where the groundwork is about 90% of the job...
That has been answered several times. You are assuming that instrumental value is ultimate value, and it isn’t.
Imagine you are arguing with someone who doesn’t “get” rationality. If they believe in instrumental values, you can persuade they they should care about rationality because it will enable them to achieve their aims. If they don’t, you can’t. Even good arguments will fail to work on some people.
You should care about morality because it is morality. Morality defines (the ultimate kind of) “should”.
“What I should do” =def “what is moral”.
Not everyone does get that, which is why “don’t care” is “made to care” by various sanctions.
“Should” for what purpose?
I certainly agree there. The question is whether it is more useful to assign the label “philosophy” to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called “philosophy,” only groundwork, meaning we would all be in agreement and there is nothing to argue except definitions.
I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you’re saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I’m wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it’s all semantic confusion, and because I don’t want to sound dismissive or obstinate in continuing to say, “So what?”
Believing in truth is what rational people do.
Which is good because...?
Correct.
I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn’t care about truth at all, the process probably isn’t going to work.
I think that horse has bolted. Inasmuch as you don’t care about truth per se. you have advertised yourself as being irrational.
Winning is what rational people do. We can go back and forth like this.
It benefits me, because I enjoy helping people. See, I can say, “So what?” in response to “You’re wrong.” Then you say, “You’re still wrong.” And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I know how it could ever, possibly, potentially ever affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for “being wrong.”
Sure, people usually argue whether something is “true or false” because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is customarily unusual for someone to say they don’t care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed—very unusually—is claimed to not have any effect on such things, “true” and “false” become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can’t, I will happily discard them.
So you say. I can think of two arguments against that: people acquire true beliefs that aren’t immediately useful, and untrue beliefs can be pleasing.
I never said they had to be “immediately useful” (hardly anything ever is). Untrue beliefs might be pleasing, but when people are arguing truth and falsehood it is not in order to prove that the beliefs they hold are untrue so that they can enjoy believing them, so it’s not an objection either.
You still don’t have a good argument to the effect that no one cares about truth per se.
A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I’m just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don’t/wouldn’t care about “truth” in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.
I think “usually” is enough qualification, especially considering that he says “makes a difference” and not “completely determines”.
Hmm. It sounds to me like a kind of methodological twist on logical positivism... just don’t bother with things that don’t have empirical consequences.