Some thoughts on this and related LW discussions. They come a bit late—apols to you and commentators if they’ve already been addressed or made in the commentary:
1) Definitions (this is a biggie).
There is a fair bit of confusion on LW, it seems to me, about just what definitions are and what their relevance is to philosophical and other discussion. Here’s my understanding—please say if you think I’ve gone wrong.
If in the course of philosophical discussion, I explicitly define a familiar term, my aim in doing so is to remove the term from debate—I fix the value of a variable to restrict the problem. It’d be good to find a real example here, but I’m not convinced defining terms happens very often in philosophical or other debate. By way of a contrived example, one might want to consider, in evaluating some theory, the moral implications of actions made under duress (a gun held to the head) but not physically initiated by an external agent (a jostle to the arm). One might say, “Define ‘coerced action’ to mean any action not physically initiated but made under duress” (or more precise words to that effect). This done, it wouldn’t make sense simply to object that my conclusion regarding coerced actions doesn’t apply to someone physically pushed from behind—I have stipulated for the sake of argument that I’m not talking about such cases. (In this post, you distinguish stipulation and definition—do you have in mind a distinction I’m glossing over?)
Contrast this to the usual case for conceptual analyses, where it’s assumed there’s a shared concept (‘good’, ‘right’, ‘possible’, ‘knows’, etc), and what is produced is meant to be a set of necessary and sufficient conditions that capture the concept. Such an analysis is not a definition. Regarding such analyses, typically one can point to a particular thing and say, eg, “Our shared concept includes this specimen, it lacks a necessary condition, therefore your analysis is mistaken”—or, maybe “Intuitively, this specimen falls under our concept, it lacks...”. Such a response works only if there is broad agreement that the specimen falls under the concept. Usually this works out to be the case.
I haven’t read the Jackson book, so please do correct me if you think I’ve misunderstood, but I take it something like this is his point in the paragraphs you quote. Tom and Jack can define ‘right action’ to mean whatever they want it to. In so doing, however, we cease to have any reason to think they mean by the term what we intuitively do. Rather, Jackson is observing, what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there—no defining involved.
You say,
… Jackson supposes that we can pick out which platitudes of moral discourse matter, and how much they matter, for determining the meaning of moral terms
Well, not quite. The point, I take it, is rather that there simply are ‘folk’ platitudes which pick out the meanings of moral terms—this is the starting point. ‘Killing people for fun is wrong’, ‘Helping elderly ladies across the street is right’ etc, etc. These are the data (moral intuitions, as usually understood). If this isn’t the case, there isn’t even a subject to discuss. Either way, it has nothing to do with definitions.
Confusion about definitions is evident in the quote from the post you link to. To re-quote:
...the first person is speaking as if ‘sound’ means acoustic vibrations in the air; the second person is speaking as if ‘sound’ means an auditory experience in a brain. If you ask “Are there acoustic vibrations?” or “Are there auditory experiences?”, the answer is at once obvious. And so the argument is really about the definition of the word ‘sound’.
Possibly the problem is that ‘sound’ has two meanings, and the disputants each are failing to see that the other means something different. Definitions are not relevant here, meanings are. (Gratuitous digression: what is “an auditory experience in a brain”? If this means something entirely characterizable in terms of neural events, end of story, then plausibly one of the disputants would say this does not capture what he means by ‘sound’ - what he means is subjective and ineffable, something neural events aren’t. He might go on to wonder whether that subjective, ineffable thing, given that it is apparently created by the supposedly mind-independent event of the falling of a tree, has any existence apart from his self (not to be confused with his brain!). I’m not defending this view, just saying that what’s offered is not a response but rather a simple begging of the question against it. End of digression.)
2) In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis. If you really want to make your case, why not take a real example of a philosophical argument -preferably one widely held in high regard at least by philosophers? There’s lots of ’em around.
3) In your section The trouble with conceptual analysis, you finally explain,
The trouble is that philosophers often take this “what we mean by” question so seriously that thousands of pages of debate concern which definition to use… .
As explained above, philosophical discussion is not about “which definition to use” -it’s about (roughly, and among other things) clarifying our concepts. The task is difficult but worthwhile because the concepts in question are important but subtle.
Within 20 seconds of arguing about the definition of ‘desire’, someone will say, “Screw it. Taboo ‘desire’ so we can argue about facts and anticipations, not definitions.”
If you don’t have the patience to do philosophy, or you don’t think it’s of any value, by all means do something else -argue about facts and anticipations, whatever precisely that may involve. Just don’t think that in doing this latter thing you’ll address the question philosophy is interested in, or that you’ve said anything at all so far to show philosophy isn’t worth doing. In this connection, one of the real benefits of doing philosophy is that it encourages precision and attention to detail in thinking. You say Eliezer Yudkowsky ”...advises against reading mainstream philosophy because he thinks it will ‘teach very bad habits of thought that will lead people to be unable to do real work.‘” The original quote continues, ”...assume naturalism! Move on! NEXT!” Unfortunately Eliezer has a bad habit of making unclear and undefended or question-begging assertions, and this is one of them. What are the bad habits, and how does philosophy encourage them? And what precisely is meant by ‘naturalism’? To make the latter assertion and simultaneously to eschew the responsibility of articulating what this commits you to is to presume you can both have your cake and eat it too. This may work in blog posts -it wouldn’t pass in serious discussion.
(Unlike some on this blog, I have not slavishly pored through Eliezer’s every post. If there is somewhere a serious discussion of the meaning of ‘naturalism’ which shows how the usual problems with normative concepts like ‘rational’ can successfully be navigated, I will withdraw this remark).
Within 20 seconds of arguing about the definition of ‘desire’, someone will say, “Screw it. Taboo ‘desire’ so we can argue about facts and anticipations, not definitions.”
If you don’t have the patience to do philosophy, or you don’t think it’s of any value, by all means do something else -argue about facts and anticipations, whatever precisely that may involve. Just don’t think that in doing this latter thing you’ll address the question philosophy is interested in, or that you’ve said anything at all so far to show philosophy isn’t worth doing.
You’re tacitly defining philosophy as an endeavor that “doesn’t involve facts or anticipations,” that is, as something not worth doing in the most literal sense. Such “philosophy” would be a field defined to be useless for guiding one’s actions. Anything that is useless for guiding my actions is, well, useless.
The question of what is worth doing is of course profoundly philosophical. You have
just assumed an answer: that what is worth doing is achieving your aims efficiently
and what is not worth doing is thinking about whether you have good aims, or
which different aims you should have. (And anything that influences your goals
will most certainly influence your expected experiences).
We’ve been over this: either “good aims” and “aims you should have” imply some kind of objective value judgment, which is incoherent, or they merely imply ways to achieve my final aims more efficiently, and we are back to my claim above, as that is included under the umbrella of “guiding my actions.”
I think Peterdjones’s answer hits it on the head. I understand you’ve thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.
Really I meant to be throwing the ball back to lukeprog to give us an idea of what the ‘arguing about facts and anticipations’ alternative is, if not just philosophy pretending not to be. I could have been clearer about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post on anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
As far as objective value, I simply don’t understand what anyone means by the term. And I think lukeprog’s point could be summed up as, “Trying to figure out how each discussant is defining their terms is not really ‘doing philosophy’; it’s just the groundwork necessary for people not to talk past each other.”
As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can’t figure out what anticipations X entails, I will just respond, “So what?”
To unite the two themes: The ultimate definition would tell me why to care.
The ultimate definition would tell me why to care.
In the space of all possible meta-ethics, some meta-ethics are cooperative, and other meta-ethics are not. This means that if you can choose which metaethics to spread to society, you stand a better chance at achieving your own goals if you spread cooperative metaethics. And cooperative metaethics is what we call “morality”, by and large.
It’s “Do unto others...”, but abstracted a bit, so that we really mean “Use the reasoning to determine what to do unto others, that you would rather they used when deciding how to do unto you.”
Omega puts you in a room with a big red button.
“Press this button and you get ten dollars but another person will be poisoned to slowly die. If you don’t press it I punch you on the nose and you get no money.
They have a similar button which they can use to kill you and get 10 dollars. You can’t communicate with them. In fact they think they’re the only person being given the option of a button, so this problem isn’t exactly like Prisoner’s dilemma. They don’t even know you exist or that their own life is at stake.”
“But here’s the offer I’m making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; of course if you identify yourself in your decision theory, they’ll be identifying themselves.
“Careful though: This is a one time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances.”
Given the above scenario, you’ll end up wanting people to choose protecting the life of strangers more than picking 10 dollars.
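To make the structure of the scenario concrete, here is a minimal toy model (my own sketch; the utility numbers are invented, and nothing beyond the payoff structure described above comes from the thread). The point is that once the same rule is installed in both agents, you evaluate a rule by what it yields when applied symmetrically:

```python
# Toy model of the Omega button scenario: the rule you choose is installed in
# BOTH agents, so a rule is scored by the outcome of applying it symmetrically.
# Utility numbers are invented for illustration.
UTILITY = {
    "alive": 1000.0,      # survival dominates the other stakes
    "dead": 0.0,
    "ten_dollars": 10.0,
    "punched": -5.0,
}

def my_payoff(my_choice, other_choice):
    """My utility given both choices ('press' or 'refrain')."""
    # The *other* agent's button is what kills me.
    survival = UTILITY["dead"] if other_choice == "press" else UTILITY["alive"]
    # My own button only affects money vs. a punch.
    side_effect = UTILITY["ten_dollars"] if my_choice == "press" else UTILITY["punched"]
    return survival + side_effect

def value_of_shared_rule(rule):
    """Omega imprints the same rule on both agents, so both make the same choice."""
    return my_payoff(rule, rule)

for rule in ("press", "refrain"):
    print(rule, value_of_shared_rule(rule))
# refrain scores 995.0 vs 10.0 for press: once the rule is shared, protecting
# the stranger beats taking the ten dollars, which is the point made above.
```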
Not quite my point. I’m not talking about what your preferences would be. That would be subjective, personal. I’m talking about what everyone’s meta-ethical preferences would be, if self-consistent, and abstracted enough.
My argument is essentially that objective morality can be considered the position in meta-ethical-space which if occupied by all agents would lead to the maximization of utility.
That makes it objectively (because it refers to all the agents, not some of them, or one of them) different from other points in meta-ethical-space, and so it can be considered to lead to an objectively better morality.
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you’re saying in what sense it’s supposed to be objective.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively. And who’s going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that’s a debate for another day.
I’m getting a bad vibe here, and no longer feel we’re having the same conversation.
“Person or group that decides”? Who said anything about anyone deciding anything? And my point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody “decides”, or everyone does. And if they don’t reach the same decision, then there’s no single objective morality—but even if so, perhaps there’s a limited set of coherent metaethical positions, like two or three of them.
I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas.
I think my post was inspired more by TDT solutions to the Prisoner’s dilemma and Newcomb’s problem, a decision theory that takes into account the copies/simulations of its own self, or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies.
I imagined systems that are not wholly copied, where only the module that determines the meta-ethical constraints is, and tried to figure out in which directions such systems would try to modify themselves, in the knowledge that other such systems would similarly modify themselves.
You’re right, I think I’m confused about what you were talking about, or I inferred too much. I’m not really following at this point either.
One thing, though, is that you’re using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God’s will, as a way to get along with others, etc. That’ll tend to cause some confusion. A good heuristic is, “Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it).”
One thing, though, is that you’re using meta-ethics to mean ethics.
I’m not.
An ethic may say:
I should support same-sex marriage. (SSM-YES) or perhaps:
I should oppose same-sex marriage. (SSM-NO)
The reason for this position is the meta-ethic: e.g.
Because I should act to increase average utility. (UTIL-AVERAGE)
Because I should act to increase total utility. (UTIL-TOTAL)
Because I should act to increase total amount of freedom (FREEDOM-GOOD)
Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
Because I should obey the will of our voters (DEMOCRACY-GOOD)
Because I should do what God commands. (OBEY-GOD).
But some metaethical positions are invalid because of false assumptions (e.g. God’s existence). Other positions may not be abstract enough to become universal or to apply to all situations. Some combinations of ethics and metaethics may be the result of other factual or reasoning mistakes (e.g. someone thinks SSM will harm society, but it ends up helping it, even by the person’s own measure).
So, NO, I don’t necessarily speak about Collective Greatest Happiness Utilitarianism. I’m NOT talking about a specific metaethic, not even necessarily a consequentialist metaethic (let alone a “Greatest happiness utilitarianism”). I’m speaking about the hypothetical point in metaethical space that everyone would hypothetically prefer everyone to have—an Attractor of metaethical positions.
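One way to see the ethic/meta-ethic split being described here is as a tiny derivation: the object-level position falls out of a meta-ethic plus factual beliefs, so a factual mistake can flip the ethic while the meta-ethic stays fixed. A minimal sketch (my own, with invented numbers, using the SOCIETAL-HAPPYGOOD-AVERAGE meta-ethic from the list above purely as an example):

```python
# Sketch: an object-level ethic (SSM-YES / SSM-NO) derived from a meta-ethic
# plus factual beliefs. Numbers are invented for illustration.

def societal_happygood_average(world):
    """Meta-ethic: act to increase average societal happiness."""
    return world["average_happiness"]

def derive_ssm_position(metaethic, world_if_ssm, world_if_no_ssm):
    """Return the object-level position the meta-ethic endorses, given beliefs."""
    return "SSM-YES" if metaethic(world_if_ssm) > metaethic(world_if_no_ssm) else "SSM-NO"

# Mistaken factual belief: SSM thought to lower average happiness.
print(derive_ssm_position(societal_happygood_average,
                          {"average_happiness": 0.4},
                          {"average_happiness": 0.6}))   # SSM-NO

# Corrected belief: SSM actually raises it; same meta-ethic, opposite ethic.
print(derive_ssm_position(societal_happygood_average,
                          {"average_happiness": 0.7},
                          {"average_happiness": 0.6}))   # SSM-YES
```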
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively.
That’s very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it’s also been argued that introspection (if that is what you mean by “subjectively”) is not a reliable guide to motivation.
This is the whole demonstrated preference thing. I don’t buy it myself, but that’s a debate for another time. What I mean by subjectively is that I will value one person’s life more than another person’s life, or I could think that I want that $1,000,000 more than a rich person wants it, but that’s just all in my head. To compare utility functions and work from demonstrated preference usually—not always—is a precursor to some kind of authoritarian scheme. I can’t say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from co-operation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf. the Prudent Predator objection to Rand’s egoistic ethics).
OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well,
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise, and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics—which overlaps with but doesn’t equal altruism, same as it overlaps with but doesn’t equal selfishness.
The difference between morality and immorality, is that morality can at its most abstract possible level be cooperative, and immorality can’t.
This by itself isn’t a reason that can force someone to care—you can’t make a rock care about anything, but that’s not a problem with your argument. But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world populated by beings whose beliefs don’t approach it, I expect more war and other devastation.
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise;
Although it usually doesn’t.
and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics—which overlaps with but doesn’t equal altruism, same as it overlaps with but doesn’t equal selfishness.
I think that your version of altruism is a straw man, and that what most people mean by altruism isn’t very different from co-operation.
The difference between morality and immorality, is that morality can at its most abstract possible level be cooperative, and immorality can’t.
Or, as I call it, universalisability.
But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
That argument doesn’t have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences—it can be a self-fulfilling prophecy and not merely passive anticipation.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world populated by beings whose beliefs don’t approach it, I expect more war and other devastation.
There is an argument from self interest, but it is tertiary to the two arguments I mentioned above.
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:
I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality”.
-though the apparent tension in being a solipsist who argues gets to the root of the issue.
For what it may be worth:
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values—you can’t fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions -has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc.. Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there’s a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
I took a slightly different tack, which is maybe moot given your admission to being a solipsist
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I’m sure it will be.
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to (unless it means something like “in accordance with consensus”). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you’ll call it (I mean them all synonymously).
If after engaging in such discourse I am not able to do that, I will eventually want to ask, “So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don’t want?”
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on.
Whose language? What language? If you think all language is a problem, what do you intend to replace it with?
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to
It refers to the stuff that doesn’t go away when you stop believing in it.
“To speak of there being something/nothing out there is meaningless to me unless I can see why to care.”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
Note the bold.
Whose language? What language?
English, and all the rest that I know of.
If you think all language is a problem, what do you intend to replace it with?
Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.
It refers to the stuff that doesn’t go away when you stop believing in it.
If so, I suggest “permanent” as a clearer word choice.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf Dennett’s Intentional Stance.
Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
To say that moral values are both objective and disconnected from physical
fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent.
For some value of “incoherent”. Personally, I find it useful to strike out the word and replace it with something more precise, such as “semantically meaningless”, “contradictory”, “self-undermining”, etc.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
I take the position that while we may well have evolved with different values, they wouldn’t be morality. “Morality” is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
I take the position that while we may well have evolved with different values, they wouldn’t be morality.
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different to the claim that naturally-evolved behavioural instincts are ipso facto moral)
I’m not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call “values”. Hence, our values are moral.
But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn’t about values, it’s about life and death and happiness and sadness and many other things beside.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf Dennett’s Intentional Stance.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here. Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we’re talking about the social sciences, that’s another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I’d be open to hear a different view.
Again, I find it incredible that natural facts have no relation to morality.
I didn’t say this—just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view. The two views are incommensurable, but neither is dispensable -people need reasons.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here.
Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible.
Rational agents should win, in short.
That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to the similar analytical truth: “a moral agent ought to adopt universalisable goals.”
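For what it is worth, the “rational ought” being unpacked here can be written down in a few lines. A minimal sketch (mine, with made-up numbers): given a utility function over outcomes and beliefs about what each action leads to, the “ought” just picks out the action with the highest expected utility relative to the agent’s own, arbitrary goals.

```python
# Sketch of the analytical "rational ought": maximise expected utility
# relative to the agent's own utility function. Numbers are invented.

def expected_utility(action, outcome_probs, utility):
    return sum(p * utility[outcome] for outcome, p in outcome_probs[action].items())

utility = {"pass_exam": 1.0, "fail_exam": 0.0}      # the agent's arbitrary goals
outcome_probs = {
    "study": {"pass_exam": 0.8, "fail_exam": 0.2},  # beliefs about consequences
    "slack": {"pass_exam": 0.3, "fail_exam": 0.7},
}

best = max(outcome_probs, key=lambda a: expected_utility(a, outcome_probs, utility))
print(best)  # "study": an "ought" that appears only relative to the agent's own goals
```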
I didn’t say this—just that from a purely scientific point of view, morality is invisible.
“Oughts” in general appear wherever you have rules, which are often abstractly
defined so that they apply to physical systems as well as anything else.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?).
I think LWers would say there are facts about her utility function from which
conclusions can be drawn about how she should maximise it (and how she
would if she were rational).
To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view.
I don’t see why. If a person or other system has goals and is acting to achieve
those goals in an effective way, then their goals can be inferred from their actions.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to the similar analytical truth: “a moral agent ought to adopt universalisable goals.”
I expressed myself badly. I agree entirely with this.
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
And I want to persuade LWers
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
and
2) that this is ok—these are still respectable facts, notwithstanding.
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
And they are likely to riposte that facts about her UF are naturalistic just because they
can be inferred from her behaviour. You seem to be in need of a narrow,
stipulative definition of naturalistic.
Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
You introduced the word “basic” there. It might be the case that goals disappear on
a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren’t basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferrability of UFs (or goals or intentions) from coarse-grained physical behaviour
is a good basis for supposing them to be natural by that usage.
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour.
But this is false, surely. I take it that a fact about X’s UF might be something such as ‘X prefers apples to pears’ (is this what you have in mind?). First, notice that X may also prefer his/her philosophy TA to his/her EM Fields and Waves TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple-gathering behaviour -naturalistically unproblematic- but you also need to impute to X the belief that the things gathered are apples. X might be picking the apples thinking they are pears). There’s any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No number of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there’s no easy naturalistic solution.
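The underdetermination point can be made concrete with a toy example (my own; the fruit, beliefs, and desire weights are all made up): two different belief/desire assignments produce exactly the same observable behaviour, so the behaviour alone does not pin down the utility function.

```python
# Sketch of underdetermination: distinct belief/desire pairs yield the same
# observable action, so behaviour alone doesn't fix the attribution.

def act(beliefs, desires, fruit_on_tree):
    """Gather the fruit iff, under the agent's beliefs, it's the most-desired kind."""
    believed_kind = beliefs[fruit_on_tree]          # what the agent takes the fruit to be
    return "gather" if desires[believed_kind] == max(desires.values()) else "ignore"

fruit_on_tree = "apple"

# Interpretation 1: X identifies apples correctly and prefers apples to pears.
beliefs_1, desires_1 = {"apple": "apple"}, {"apple": 2, "pear": 1}

# Interpretation 2: X mistakes apples for pears and prefers pears to apples.
beliefs_2, desires_2 = {"apple": "pear"}, {"apple": 1, "pear": 2}

print(act(beliefs_1, desires_1, fruit_on_tree))  # gather
print(act(beliefs_2, desires_2, fruit_on_tree))  # gather (same behaviour, different UF)
```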
What they generally mean is “not subjective”. You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since
one has to understand the meanings of individual terms to see a contradiction.
As for anticipations: believing morality is objective entails that some of your beliefs
may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
I’m not saying non-subjective value is contradictory, just that I don’t know what it could mean. To me “value” is a verb, and the noun form is just a nominalization of the verb, like the noun “taste” is a nominalization of the verb “taste.” Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn’t understand what she meant either.
As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
But before I would even want to revise my aims and goals, I’d have to anticipate something different than I do now. What does “some of your beliefs may be wrong by objective standards” make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the “wrong” moral sentiments?)
“value” is a verb, and the noun form is just a nominalization of the verb,
I don’t see the force to that argument.
“Believe” is a verb and “belief” is a nominalisation. But beliefs can be objectively
right or wrong—if they belong to the appropriate subject area.
Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music,
It is possible for aesthetics (and various other things) to be un-objectifiable whilst morality (and various other things) is objectifiable.
But before I would even want to revise my aims and goals, I’d have to anticipate something different than I do now.
Why?
What does “some of your beliefs may be wrong by objective standards” make me anticipate that would motivate me to change my goals?
You should be motivated by a desire to get things right in general. The anticipation
thing is just a part of that. It’s not an ultimate. But morality is an ultimate because
there is no more important value than a moral value.
(This is the same as the question in the other comment: What penalty do I suffer by having the “wrong” moral sentiments?)
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral. You should be moral by the definition of “moral” and “should”. It’s an analytical truth.
It is for selfishness to justify itself in the face of morality, not vice versa.
First of all, I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality.” It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, anyone else—if they understand what I’m getting at here).
You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It’s not an ultimate
So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don’t pay rent but are still important? I just don’t see how that makes sense.
You should be moral by the definition of “moral” and “should”.
This seems circular.
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral.
First of all, I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality.” It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably
How do you know that?
So you disagree with EY about making beliefs pay rent?
If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.
You should be moral by the definition of “moral” and “should”.
This seems circular.
You say that like that’s a bad thing. I said it was analytical and analytical truths would be expected to sound tautologous or circular.
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral.
It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably
How do you know that?
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.
Well, what use is your belief in “objective value”?
So it’s still true. Not caring is not refutation.
Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like “true” and “refute.” I would substitute “useful” and “show people why it is not useful,” respectively.
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
Well, what use is your belief in “objective value”?
What objective value are your instrumental beliefs? You keep assuming
useful-to-me is the ultimate value and it isn’t: Morality is, by definition.
Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like “true” and “refute.”
Then I have a bridge to sell you.
I would substitute “useful” and “show people why it is not useful,” respectively.
And would it be true that it is non-useful? Since to assert P is to assert “P is true”,
truth is a rather hard thing to eliminate. One would have to adopt the silence
of Diogenes.
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
That’s what I was responding to.
What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn’t: Morality is, by definition.
Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.
And would it be true that it is non-useful? Since to assert P is to assert “P is true”, truth is a rather hard thing to eliminate.
You just eliminated it: If to assert P is to assert “P is true,” then to assert “P is true” is to assert P. We could go back and forth like this for hours.
But you still haven’t defined objective value.
Dictionary says, “Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.”
How can a value be objective? EDIT: Especially since a value is a personal feeling. If you are defining “value” differently, how?
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
That’s what I was responding to.
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.
Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.
I think moral values are ultimate because I can’t think of a valid argument of the form “I should do [X] because [Y]”. Please give an example of a pan-galactic value that can be substituted for [Y].
You just eliminated it: If to assert P is to assert “P is true,” then to assert “P is true” is to assert P. We could go back and forth like this for hours.
Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to “no, that’s not true”.
Dictionary says, [objective] “Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.”
How can a value be objective?
By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.
Especially since a value is a personal feeling.
You haven’t remotely established that as an identity. It is true that some people
some of the time arrive at values through feelings. Others arrive at them (or revise
them) through facts and thinking.
If you are defining “value” differently, how?
“Values can be defined as broad preferences concerning appropriate courses of action or outcomes”
If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to “no, that’s not true”.
I’ll just decide not to follow the advice, or I’ll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don’t need to use the word “true” or any equivalent to do that. I can just say it didn’t work.
People have been known to follow really bad advice, sometimes to their detriment and suffering a lot of pain along the way.
Some people have followed excessively stringent diets to the point of malnutrition or death. (This isn’t intended as a swipe at CR—people have been known to go a lot farther than that.)
People have attempted (for years or decades) to shut down their sexual feelings because they think their God wants it.
I’ll just decide not to follow the advice, or I’ll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don’t need to use the word “true” or any equivalent to do that. I can just say it didn’t work.
Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent—showing that you have dispensed with the concept—is harder. Why didn’t it work? You’re going to have to paraphrase “Because it wasn’t true” or refuse to answer.
The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It’s impossible to show you’ve dispensed with any concept, except to show that it isn’t useful for what you’re doing. That is what I’ve done. I’m non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.
The concept of truth is for utility, not utility for truth.
Truth is useful for whatever you want to do with it. If people can collect stamps for the sake of collecting stamps, they can collect truths for the sake of collecting truths.
I’m non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion
Sounding like religion would not render something incomprehensible...but it could easily provoke an “I don’t like it” reaction, which is then dignified with the label “incoherent” or whatever.
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences.
I agree, if you mean things like, “If I now believe that she is really a he, I don’t want to take ‘her’ home anymore.”
I think moral values are ultimate because I can’t think of a valid argument of the form “I should do [X] because [Y]”.
Neither can I. I just don’t draw the same conclusion. There’s a difference between disagreeing with something and not knowing what it means, and I seriously do not know what you mean. I’m not sure why you would think it is veiled disagreement, seeing as lukeprog’s whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of “incoherent to me” or someone else, so it’s not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English’s fault, and I don’t really care which it is, but it would be preferable for something to actually make it across the inferential gap.)
EDIT: Oops, posted too soon.
“Values can be defined as broad preferences concerning appropriate courses of action or outcomes”
So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn’t carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn’t matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.
At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun/deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.
But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn’t make my want for deliciousness “uninfluenced by personal feelings.”
In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn’t lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus “objective values.” But not for 1, because personal feelings are, well, personal.
If so, then it seems I am back to my initial response to lukeprog and ensuing brief discussion. In short, if it is only the belief in objective facts that can be wrong, then I wouldn’t want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values 1, but about their preferences/values 2.
There’s a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I’m not sure why you would think it is veiled disagreement, seeing as lukeprog’s whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of “incoherent to me” or someone else,
“Incoherence” means several things. Some of them, such as self-contradiction, are as objective as anything. You seem to find morality meaningless in some personal sense. Looking at dictionaries doesn’t seem to work for you. Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves. So that’s why I see complaints of incoherence as being disguised disagreement.
At bottom, I act to get enjoyment and/or avoid pain, that is, to win.
If you say so. That doesn’t make morality false, meaningless or subjective. It makes
you an amoral hedonist.
But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn’t make my want for deliciousness “uninfluenced by personal feelings.”
Perhaps not completely, but that still leaves some things as relatively more objective than others.
In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn’t lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus “objective values.” But not for 1, because personal feelings are, well, personal.
Then your categories aren’t exhaustive, because preferences can also be defined to include universalisable values alongside personal whims. You may be making the classic error of taking “subjective” to mean “believed by a subject”.
Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves
The problem isn’t that I don’t know what it means. The problem is that it means many different things and I don’t know which of those you mean by it.
an amoral hedonist
I have moral sentiments (empathy, sense of justice, indignation, etc.), so I’m not amoral. And I am not particularly high time-preference, so I’m not a hedonist.
preferences can also be defined to include universalisable values alongside personal whims
If you mean preferences that everyone else shares, sure, but there’s no stipulation in my definitions that other people can’t share the preferences. In fact, I said, “(though they may be universal or semi-universal).”
You may be making the classic error of taking “subjective” to mean “believed by a subject”
It’d be a “classic error” to assume you meant one definition of subjective rather than another, when you haven’t supplied one yourself? This is about the eighth time in this discussion that I’ve thought that I can’t imagine what you think language even is.
I doubt we have any disagreement, to be honest. I think we just view language radically differently. (You could say we have a disagreement about language.)
Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves
The problem isn’t that I don’t know what it means.
What “moral” means or what “good” means?
The problem is that it means many different things and I don’t know which of those you mean by it.
No, that isn’t the problem. It has one basic meaning, but there are a lot of different theories about it. Elsewhere you say that utilitarianism renders objective morality meaningful. A theory of X cannot render X meaningful, but it can render X plausible.
I have moral sentiments (empathy, sense of justice, indignation, etc.), so I’m not amoral. And I am not particularly high time-preference, so I’m not a hedonist.
But you theorise that you only act on them (and that nobody ever acts but) to increase your pleasure.
If you mean preferences that everyone else shares, sure, but there’s no stipulation in my definitions that other people can’t share the preferences.
I don’t see the point in stipulating that preferences can’t be shared. People who
believe they can be just have to find another word. Nothing is proven.
You may be making the classic error of taking “subjective” to mean “believed by a subject”
It’d be a “classic error” to assume you meant one definition of subjective rather than another, when you haven’t supplied one yourself?
I’ve quoted the dictionary definition, and that’s what I mean.
“1. existing in the mind; belonging to the thinking subject rather than to the object of thought (opposed to objective).
2. pertaining to or characteristic of an individual; personal; individual: a subjective evaluation.
3. placing excessive emphasis on one’s own moods, attitudes, opinions, etc.; unduly egocentric”
This is about the eighth time in this discussion that I’ve thought that I can’t imagine what you think language even is.
I think language is public, I think (genuine) disagreements about meaning can
be resolved with dictionaries, and I think you shouldn’t assume someone is using
idiosyncratic definitions unless they give you good reason.
As far as objective value, I simply don’t understand what anyone means by the term.
Objective truth is what you should believe even if you don’t. Objective values
are the values you should have even if you have different values.
And I think lukeprog’s point could be summed up as, “Trying to figure out how each discussant is defining their terms is not really ‘doing philosophy’; it’s just the groundwork necessary for people not to talk past each other.”
Where the groundwork is about 90% of the job...
As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can’t figure out what anticipations X entails, I will just respond, “So what?”
That has been answered several times. You are assuming that instrumental value
is ultimate value, and it isn’t.
To unite the two themes: The ultimate definition would tell me why to care.
Imagine you are arguing with someone who doesn’t “get” rationality. If they
believe in instrumental values, you can persuade them they should care about
rationality because it will enable them to achieve their aims. If they don’t, you can’t.
Even good arguments will fail to work on some people.
You should care about morality because it is morality. Morality defines (the ultimate kind of) “should”.
“What I should do” =def “what is moral”.
Not everyone does get that, which is why “don’t care” is “made to care” by various sanctions.
As far as objective value, I simply don’t understand what anyone means by the term.
Objective truth is what you should believe even if you don’t.
“Should” for what purpose?
Where the groundwork is about 90% of the job...
I certainly agree there. The question is whether it is more useful to assign the label “philosophy” to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called “philosophy,” only groundwork, meaning we would all be in agreement and there is nothing to argue except definitions.
Imagine you are arguing with someone who doesn’t “get” rationality. If they believe in instrumental values, you can persuade them they should care about rationality because it will enable them to achieve their aims. If they don’t, you can’t.
I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you’re saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I’m wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it’s all semantic confusion, and because I don’t want to sound dismissive or obstinate in continuing to say, “So what?”
Objective truth is what you should believe even if you don’t.
“Should” for what purpose?
Believing in truth is what rational people do.
Imagine you are arguing with someone who doesn’t “get” rationality. If they believe in instrumental values, you can persuade them they should care about rationality because it will enable them to achieve their aims. If they don’t, you can’t.
I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims.
Which is good because...?
It seems you’re saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I’m wrong).
Correct.
This is what makes me curious about why you think I would care.
I can argue that your personal aims are not the ultimate value, and I can
suppose you might care about that just because it is true. That is how
arguments work: one rational agent tries to persuade another that something
is true. If one of the participants doesn’t care about truth at all, the process
probably isn’t going to work.
The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it’s all semantic confusion, and because I don’t want to sound dismissive or obstinate in continuing to say, “So what?”
I think that horse has bolted. Inasmuch as you don’t care about truth per se, you have advertised yourself as being irrational.
Winning is what rational people do. We can go back and forth like this.
Which is good because...?
It benefits me, because I enjoy helping people. See, I can say, “So what?” in response to “You’re wrong.” Then you say, “You’re still wrong.” And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I know how it could ever, possibly, potentially affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for “being wrong.”
I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn’t care about truth at all, the process probably isn’t going to work.
Sure, people usually argue whether something is “true or false” because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is customarily unusual for someone to say they don’t care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed—very unusually—is claimed to not have any effect on such things, “true” and “false” become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can’t, I will happily discard them.
Sure, people usually argue whether something is “true or false” because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc.
So you say. I can think of two arguments against that: people acquire true beliefs that
aren’t immediately useful, and untrue beliefs can be pleasing.
I never said they had to be “immediately useful” (hardly anything ever is). Untrue beliefs might be pleasing, but when people are arguing truth and falsehood it is not in order to prove that the beliefs they hold are untrue so that they can enjoy believing them, so it’s not an objection either.
A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I’m just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don’t/wouldn’t care about “truth” in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.
For example, the thinking in the post anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism.
Hmm. It sounds to me like a kind of methodological twist on logical positivism...just don’t bother with things that don’t have empirical consequences.
I think Peterdjones’s answer hits it on the head. I understand you’ve thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.
Really I meant to be throwing the ball back to lukeprog to give us an idea of what the ‘arguing about facts and anticipations’ alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post [anticipations](http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/) would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
You say that objective values are incoherent, but you offer no argument for it. Presenting philosophical claims without justification isn’t something different to philosophy, or
something better. It isn’t good rationality either. Rationality is as rationality does.
By incoherent I simply mean “I don’t know how to interpret the words.” So far no one seems to want to help me do that, so I can only await a coherent definition of objective ethics and related terms. Then possibly an argument could start. (But this is all like deja vu from the recent metaethics threads.)
“Morality is subjective”: Each person has their own moral sentiments.
“Morality is not subjective”: Each person does not have their own moral sentiments. Or there is something more than each person’s moral sentiments that is worth calling “moral.” <--- But I ask, what is that “something more”?
OK. That is not what “subjective” means. What it means is that if something is subjective, an opinion is guaranteed to be correct or the last word on the matter just because it is the person’s opinion. And “objective” therefore means that it is possible for someone to be wrong in their opinion.
I don’t claim moral sentiments are correct, but simply that a person’s moral sentiment is their moral sentiment. They feel some emotions, and that’s all I know. You are seeming to say there is some way those emotions can be correct or incorrect, but in what sense? Or probably a clearer way to ask the question is, “What disadvantage can I anticipate if my emotions are incorrect?”
An emotion, such as a feeling of elation or disgust, is not correct or incorrect per se; but an emotion per se is no basis for a moral sentiment, because moral sentiment has to
be about something. You could think gay marriage is wrong because homosexuality disgusts you, or you could feel serial-killing is good because it elates you, but that
doesn’t mean the conclusions you are coming to are right. It may be a cast iron fact that you have those particular sentiments, but that says nothing about the correctness of their
content, any more than any opinion you entertain is automatically correct.
ETA
The disadvantages you can expect if your emotions are incorrect include being
in the wrong whilst feeling you are in the right. Much as if you are entertaining
incorrect opinions.
Then you are, or are likely to be, morally in the wrong. That is of course possible. You can choose to do wrong. But it doesn’t constitute any kind of argument. Someone can elect to ignore the roundness of the world for some perverse reason, but that doesn’t make “the world is round” false or meaningless or subjective.
You can choose to do wrong. But it doesn’t constitute any kind of argument.
Indeed it is not an argument. Yet I can still say, “So what?” I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I’d care about it.
Someone can elect to ignore the roundness of the world for some perverse reason, but that doesn’t make ”!he world is round” false or meaningless or subjective.
The difference is, believing “The world is round” affects whether I win or not, whereas believing “I’m morally in the wrong” does not.
The difference is, believing “The world is round” affects whether I win or not, whereas believing “I’m morally in the wrong” does not.
That is apparently true in your hypothetical, but it’s not true in the real world. Just as the roundness of the world has consequences, the wrongness of an action has consequences. For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house). The more in the right you were, then, ceteris paribus, the better your chances are.
For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house).
You’re interpreting “I’m morally in the wrong” to mean something like, “Other people will react badly to my actions,” in which case I fully agree with you that it would affect my winning. Peterdjones apparently does not mean it that way, though.
You’re interpreting “I’m morally in the wrong” to mean something like, “Other people will react badly to my actions,” in which case I fully agree with you that it would affect my winning.
Actually I am not. I am interpreting “I’m morally wrong” to mean something like, “I made an error of arithmetic in an area where other people depend on me.”
An error of arithmetic is an error of arithmetic regardless of whether any other people catch it, and regardless of whether any other people react badly to it. It is not, however, causally disconnected from their reaction, because, even though an error of arithmetic is what it is regardless of people’s reaction to it, nevertheless people will probably react badly to it if you’ve made it in an area where other people depend on you. For example, if you made an error of arithmetic in taking a test, it is probably the case that the test-grader did not make the same error of arithmetic and so it is probably the case that he will react badly to your error. Nevertheless, your error of arithmetic is an error and is not merely getting-a-different-answer-from-the-grader. Even in the improbable case where you luck out and the test grader makes exactly the same error as you and so you get full marks, nevertheless, you did still make that error.
Even if everyone except you wakes up tomorrow and believes that 3+4=6, whereas you still remember that 3+4=7, nevertheless in many contexts you had better not switch to what the majority believe. For example, if you are designing something that will stand up, like a building or a bridge, you had better get your math right, you had better correctly add 3+4=7 in the course of designing the edifice if that sum is ever called on calculating whether the structure will stand up.
If humanity divides into two factions, one faction of which believes that 3+4=6 and the other of which believes that 3+4=7, then the latter faction, the one that adds correctly, will in all likelihood over time prevail on account of being right. This is true even if the latter group starts out in the minority. Just imagine what sort of tricks you could pull on people who believe that 3+4=6. Because of the truth of 3+4=7, eventually people who are aware of this truth will succeed and those who believe that 3+4=6 will fail, and over time the vast majority of society will once again come to accept that 3+4=7.
Just imagine what sort of tricks you could pull on people who believe that 3+4=6.
Nothing’s jumping out at me that would seriously impact a group’s effectiveness from day to day. I rarely find myself needing to add three and four in particular, and even more rarely in high-stakes situations. What did you have in mind?
I offer you the following deal: give me $3 today and $4 tomorrow, and I will give you a 50 cent profit the day after tomorrow, by returning to you $6.50. You can take as much advantage of this as you want. In fact, if you like, you can give me $3 this second, $4 in one second, and in the following second I will give you back all your money plus 50 cents profit—that is, I will give you $6.50 in two seconds.
Since you think that 3+4=6, you will jump at this amazing deal.
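A minimal sketch of the bookkeeping, assuming the dollar figures quoted above and ten repetitions of the deal (the variable names and the framing in Python are purely illustrative, not something from the thread):

```python
# Hypothetical sketch of the money pump described above.
# The mark believes 3 + 4 = 6, so the offer looks like a 50-cent profit;
# correct arithmetic says it is a 50-cent loss, and the loss repeats each round.

actual_pay_in = 3 + 4      # what the mark really hands over each round: $7
believed_pay_in = 6        # what the mark thinks was handed over
pay_out = 6.50             # what the pumper returns each round

believed_gain_per_round = pay_out - believed_pay_in   # +0.50 from the mark's view
actual_gain_per_round = pay_out - actual_pay_in       # -0.50 in reality

rounds = 10
print(f"Mark expects to gain ${believed_gain_per_round * rounds:.2f} over {rounds} rounds")
print(f"Mark actually loses  ${-actual_gain_per_round * rounds:.2f} over {rounds} rounds")
```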
I find that most people who believe absurd things still have functioning filters for “something is fishy about this”. I talked to a person who believed that the world was going to end in 2012, and I offered to give them a dollar right then in exchange for a hundred after the world didn’t end, but of course they didn’t take it: something was fishy about that.
Also, dollars are divisible: someone who believes that 3+4=6 may not believe that 300+400=600.
If he isn’t willing to take your trade, then his alleged belief that the world will end in 2012 is weak at best. In contrast, if you offer to give me $6.50 in exchange for $3 plus $3, then I will take your offer, because I really do believe that 3+3=6.
On the matter of divisibility, you are essentially proposing that someone with faulty arithmetic can effectively repair the gap by translating arithmetic problems away from the gap (e.g. by realizing that 3 dollars is 300 pennies and doing arithmetic on the pennies). But in order for them to do this consistently they need to know where the gap is, and if they know that, then it’s not a genuine gap. If they realize that their belief that 3+4=6 is faulty, then they don’t really believe it. In contrast, if they don’t realize that their belief that 3+4=6 is faulty, then they won’t consistently translate arithmetic problems away from the gap, and so my task becomes a simple matter of finding areas where they don’t translate problems away from the gap, but instead fall in.
Are you saying that you would not be even a little suspicious and inclined to back off if someone said they’d give you $6.50 in exchange for $3+$3? Not because your belief in arithmetic is shaky, but because your trust that people will give you fifty cents for no obvious reason is nonexistent and there is probably something going on?
I’m not denying that in a thought experiment, agents that are wrong about arithmetic can be money-pumped. I’m skeptical that in reality, human beings that are wrong about arithmetic can be money-pumped on an interesting scale.
Are you saying that you would not be even a little suspicious and inclined to back off if someone said they’d give you $6.50 in exchange for $3+$3? Not because your belief in arithmetic is shaky, but because your trust that people will give you fifty cents for no obvious reason is nonexistent and there is probably something going on?
In my hypothetical, we can suppose that they are perfectly aware of the existence of the other group. That is, the people who think that 3+4=7 are aware of the people who think that 3+4=6, and vice versa. This will provide them with all the explanation they need for the offer. They will think, “this person is one of those people who think that 3+4=7”, and that will explain to them the deal. They will see that the others are trying to profit off them, but they will believe that the attempt will fail, because after all, 3+4=6.
As a matter of fact, in my hypothetical the people who believe that 3+4=6 would be just as likely to offer those who believe that 3+4=7 a deal in an attempt to money-pump them. Since they believe that 3+4=6, and are aware of the belief of the others, they might offer the others the following deal: “give us $6.50, and then the next day we will give you $3 and the day after $4.” Since they believe that 3+4=6, they will think they are ripping the others off.
I’m not denying that in a thought experiment, agents that are wrong about arithmetic can be money-pumped. I’m skeptical that in reality, human beings that are wrong about arithmetic can be money-pumped on an interesting scale.
The thought experiment wasn’t intended to be applied to humans as they really are. It was intended to explain humans as they really are by imagining a competition between two kinds of humans—a group that is like us, and a group that is not like us. In the hypothetical scenario, the group like us wins.
And I think you completely missed my point, by the way. My point was that arithmetic is not merely a matter of agreement. The truth of a sum is not merely a matter of the majority of humanity agreeing on it. If more than half of humans believed that 3+4=6, this would not make 3+4=6 be true. Arithmetic truth is independent of majority opinion (call the view that arithmetic truth is a matter of consensus within a human group “arithmetic relativism” or “the consensus theory of arithmetic truth”). I argued for this as follows: suppose that half of humanity—nay, more than half—believed that 3+4=6, and a minority believed that 3+4=7. I argued that the minority with the latter belief would have the advantage. But if consensus defined arithmetic truth, that should not be the case. Therefore consensus does not define arithmetic truth.
My point is this: that arithmetic relativism is false. In your response, you actually assumed this point, because you’ve been assuming all along that 3+4=6 is false, even though in my hypothetical scenario a majority of humanity believed it is true.
So you’ve actually assumed my conclusion but questioned the argument that I used to argue for the conclusion.
And this, in turn, was to illustrate a more general point about consensus theories and relativism. The context was a discussion of morality. I had been interpreted as advocating what amounts to a consensus theory of morality, and I was trying to explain why my specific claims do not entail a consensus theory of morality, but are also compatible with a theory of morality as independent of consensus.
In sum, you seem to be saying that morality involves arithmetic, and being wrong about arithmetic can hurt me, so being wrong about morality can hurt me.
There’s no particular connection between morality and arithmetic that I’m aware of. I brought up arithmetic to illustrate a point. My hope was that arithmetic is less problematic, less apt to lead us down philosophical blind alleys, so that by using it to illustrate a point I wasn’t opening up yet another can of worms.
Whether someone is judged right and wrong by others has consequences, but the people doing the judging might be wrong. It is still an error to make morality justify itself in terms of instrumental utility, since there are plenty of examples of things that are
instrumentally right but ethically wrong, like improved gas chambers.
Whether someone is judged right and wrong by others has consequences, but the people doing the judging might be wrong.
Actually being in the right increases your probability of being judged to be in the right. Yes, the people doing the judging may be wrong, and that is why I made the statement probabilistic. This can be made blindingly obvious with an example. Go to a random country and start gunning down random people in the street. The people there will, with probability so close to 1 as makes no real difference, judge you to be in the wrong, because you of course will be in the wrong.
There is a reason why people’s judgment is not far off from right. It’s the same reason that people’s ability to do basic arithmetic when it comes to money is not far off from right. Someone who fails to understand that $10 is twice $5 (or rather the equivalent in the local currency) is going to be robbed blind and his chances of reproduction are slim to none. Similarly, someone whose judgment of right and wrong is seriously defective is in serious trouble. If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, “nice day”, he’s a serious candidate for a Darwin Award. Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool. And so, if you go to a random country and start killing people randomly, you will be neutralized by the locals quickly. That’s a prediction. Moral thought has predictive power.
It is still an error to make morality justify itself in terms of instrumental utility, since there are plenty of examples of things that are instrumentally right but ethically wrong, like improved gas chambers.
The only reason anyone can get away with the mass murder that you allude to is that they have overwhelming power on their side. And even they did it in secret, as I recall learning, which suggests that powerful as they were, they were not so powerful that they felt safe murdering millions openly.
Morality is how a human society governs itself in which no single person or organized group has overwhelming power over the rest of society. It is the spontaneous self-regulation of humanity. Its scope is therefore delimited by the absence of a person or organization with overwhelming power. Even though just about every place on Earth has a state, since it is not a totalitarian state there are many areas of life in which the state does not interfere, and which are therefore effectively free of state influence. In these areas of life humanity spontaneously self-regulates, and the name of the system of spontaneous self-regulation is morality.
Similarly, someone whose judgment of right and wrong is seriously defective is in serious trouble. If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, “nice day”, he’s a serious candidate for a Darwin Award. Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool.
It sounds to me like you’re describing the ability to recognize danger, not evil, there.
Say that your hypothetical criminal lunatic manages to avoid the police, and goes about his life. Later that week, he’s at a buffet restaurant, acting normally. Is he still evil? Assuming nobody recognizes him from the shooting, do you expect the other people using the buffet to react unusually to him in any way?
It sounds to me like you’re describing the ability to recognize danger, not evil, there.
It’s not either/or. There is no such thing as a bare sense of danger. For example, if you are about to drive your car off a cliff, hopefully you notice in time and stop. In that case, you’ve sensed danger—but you also sensed the edge of a cliff, probably with your eyes. Or if you are about to drink antifreeze, hopefully you notice in time and stop. In that case, you’ve sensed danger—but you’ve also sensed antifreeze, probably with your nose.
And so on. It’s not either/or. You don’t either sense danger or sense some specific thing which happens to be dangerous. Rather, you sense something that happens to be dangerous, and because you know it’s dangerous, you sense danger.
Say that your hypothetical criminal lunatic manages to avoid the police, and goes about his life. Later that week, he’s at a buffet restaurant, acting normally. Is he still evil?
Chances are higher than average that if he was a criminal lunatic a few days ago, he is still a criminal lunatic today.
Assuming nobody recognizes him from the shooting, do you expect the other people using the buffet to react unusually to him in any way?
Obviously not, because if you assume that people fail to perceive something, then it follows that they will behave in a way that is consistent with their failure to perceive it. Similarly, if you fail to notice that the antifreeze that you’re drinking is anything other than fruit punch, then you can be expected to drink it just as if it were fruit punch.
My point was that in the shooting case, the perception of danger is sufficient to explain bystanders’ behavior. They may perceive other things, but that seems mostly irrelevant.
You said:
Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool.
This claim appears to be incompatible with your expectation that people will not notice your hypothetical murderer when they encounter him acting according to social norms after committing a murder, given that he’s supposedly still evil.
My point was that in the shooting case, the perception of danger is sufficient to explain bystanders’ behavior.
People perceive danger because they perceive evil, and evil is dangerous.
They may perceive other things, but that seems mostly irrelevant.
It is not irrelevant that they perceive a specific thing (such as evil) which is dangerous. Take away the perception of the specific thing, and they have no basis upon which to perceive danger. Only Spiderman directly perceives danger, without perceiving some specific thing which is dangerous. And he’s fictional.
Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool.
This claim appears to be incompatible with your expectation that people will not notice your hypothetical murderer when they encounter him acting according to social norms after committing a murder, given that he’s supposedly still evil.
I was referring to the standard, common ability to recognize evil. I was saying that someone who does not have that ability will be cut out of the gene pool (not definitely—probabilistically, his chances of surviving and reproducing are reduced, and over the generations the effect of this disadvantage compounds).
People who fail to recognize that the guy is that same guy from before are not thereby missing the standard human ability to recognize evil.
If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, “nice day”, he’s a serious candidate for a Darwin Award.
Except when the evil guys take over; then you are in trouble if you oppose them.
The only reason anyone can get away with the mass murder that you allude to is that they have overwhelming power on their side.
That doesn’t affect my point. If there are actual or conceptual circumstances where instrumental good diverges from moral good, the two cannot be equated.
Morality is how a human society governs itself in which no single person or organized group has overwhelming power over the rest of society.
Why would it be wrong if they do? Your theory of morality seems to be in
need of another theory of morality to justify it.
Except when the evil guys take over; then you are in trouble if you oppose them.
Which is why the effective scope of morality is limited by concentrated power, as I said.
That doesn’t affect my point. If there are actual or conceptual circumstances where instrumental good diverges from moral good, the two cannot be equated.
I did not equate moral good with instrumental good in the first place.
Why would it be wrong if they do?
I didn’t say it would be wrong. I was talking about making predictions. The usefulness of morality in helping you to predict outcomes is limited by concentrated power.
You theory of morality seems to be in need of another theory of morality to justify it.
On the contrary, my theory of morality is confirmed by the evidence. You yourself supplied some of the evidence. You pointed out that a concentration of power creates an exception to the prediction that someone who guns down random people will be neutralized. But this exception fits with my theory of morality, since my theory of morality is that it is the spontaneous self-regulation of humanity. Concentrated power interferes with self-regulation.
I did not equate moral good with instrumental good in the first place.
...but you also say...
The usefulness of morality in helping you to predict outcomes
...which seems to imply that you are still thinking of morality as something that has
to pay its way instrumentally, by making useful predictions.
On the contrary, my theory of morality is confirmed by the evidence[..]But this exception fits with my theory of morality, since my theory of morality is that it is the spontaneous self-regulation of humanity. Concentrated power interferes with self-regulation.
It’s a conceptual truth that power interferes with spontaneous self-regulation: but that isn’t the point. The point is not that you have a theory that makes predictions, but
whether it is a theory of morality.
It is dubious to say of any society that the way it is organised is ipso facto moral. You have forestalled the relativistic problem by saying that societies must self-organise for equality and justice, not any old way, which takes it as read that equality and justice are Good Things. But an ethical theory must explain why they are good, not rest on them as a given.
..which seems to imply that you are still thinking of morality as something that has to pay its way instrumentally, by making useful predictions.
“Has to”? I don’t remember saying “has to”. I remember saying “does”, or words to that effect. I was disputing the following claim:
The difference is, believing “The world is round” affects whether I win or not, whereas believing “I’m morally in the wrong” does not.
This is factually false, considered as a claim about the real world.
It is dubious to say of any society that the way it is organised is ipso facto moral. You have forestalled the relativistic problem by saying that societies must self-organise for equality and justice, not any old way, which takes it as read that equality and justice are Good Things. But an ethical theory must explain why they are good, not rest on them as a given.
I am presenting the hypothesis that, under certain constraints, there is no way for humanity to organize itself but morally or close to morally and that it does organize itself morally or close to morally. The most important constraint is that the organization is spontaneous, that is to say, that it does not rely on a central power forcing everyone to follow the same rules invented by that same central power. Another constraint is absence of war, though I think this constraint is already implicit in the idea of “spontaneous order” that I am making use of, since war destroys order and prevents order.
Because humans organize themselves morally, it is possible to make predictions. However, because of the “no central power” constraint, the scope of those predictions is limited to areas outside the control of the central power.
Fortunately for those of us who seek to make predictions on the basis of morality, and also fortunately for people in general, even though the planet is covered with centralized states, much of life still remains largely outside of their control.
I am presenting the hypothesis that, under certain constraints, there is no way for humanity to organize itself but morally or close to morally and that it does organize itself morally or close to morally.
Is that a stipulative definition (“morality” =def “spontaneous organisation”), or is there some independent standard of morality on which it is based?
The most important constraint is that the organization is spontaneous, that is to say, that it does not rely on a central power forcing everyone to follow the same rules invented by that same central power.
What about non-centralised power? What if one fairly large group—the gentry, men, citizens, some racial group, have power over another in a decentralised way?
And what counts as a society? Can an Athenian slave-owner state that all citizens in their society are equal, and that, as for slaves, they are not members of their society?
ETA:
Actually, it’s worse than that. Not only are there examples of non-centralised power, there are cases where centralised power is on the side of angels and spontaneous self-organisation on the other side; for instance the Civil Rights struggle, where the federal government backed equality, and the opposition was from the grassroots.
ETA: Actually, it’s worse than that. Not only are there examples of non-centralised power, there are cases where centralised power is on the side of angels and spontaneous self-organisation on the other side; for instance the Civil Rights struggle, where the federal government backed equality, and the opposition was from the grassroots.
The Civil Rights struggle was national government versus state government, not government versus people. The Jim Crow laws were laws created by state legislatures, not spontaneous laws created by the people.
There is, by the way, such a thing as spontaneous law created by the people even under the state. The book Order Without Law is about this. The “order” it refers to is the spontaneous law—that is, the spontaneous self-government of the people acting privately, without help from the state. This spontaneous self-government ignores and in some cases contradicts the state’s official, legislated law.
Jim Crow was an example of official state law, and not an example of spontaneous order.
The Civil Rights struggle was national government versus state government, not government versus people. The Jim Crow laws were laws created by state legislatures, not spontaneous laws created by the people.
Plenty of things that happened weren’t sanctioned by state legislatures, such as discrimination by private lawyers, hassling of voters during registration drives,
and the assassination of MLK
There is, by the way, such a thing as spontaneous law created by the people even under the state.
But law isn’t morality. There are laws that apply only to certain people, and which support privilege and the status quo rather than equality and justice.
Plenty of things that happened weren’t sanctioned by state legislatures, such as discrimination by private lawyers, hassling of voters during registration drives, and the assassination of MLK.
Legislation distorts society and the distortion ripples outward. As for the assassination, that was a single act. Order is a statistical regularity.
But law isn’t morality.
I didn’t say it was. I pointed out an example of spontaneous order. It is my thesis that spontaneous order tends to be moral. Much order is spontaneous, so much order is moral, so you can make predictions on the basis of what is moral. That should not be confused with the claim that all order is moral, or that all law is moral, which is the claim that you are disputing and a claim I did not make.
From its primordial state of equality...? I can see how a society that starts equal might self-organise to stay that way. But I don’t think they start equal that often.
Indeed it is not an argument. Yet I can still say, “So what?” I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I’d care about it.
The fact that you are amoral does not mean there is anything wrong with morality, and is not an argument against it. You might as well be saying “there is a perfectly good rational argument that the world is round, but I prefer to be irrational”.
The difference is, believing “The world is round” affects whether I win or not, whereas believing “I’m morally in the wrong” does not.
That doesn’t constitute an argument unless you can explain why your winning is the only thing that should matter.
Yeah, I said it’s not an argument. Yet again I can only ask, “So what?” (And this doesn’t make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, “We both would be disgusted at the prospect of killing a dog for no reason.” But you seem to be saying there is something more.)
That doesn’t constitute an argument unless you can explain why your winning is the only thing that should matter.
The wordings “affect my winning” and “matter” mean the same thing to me. I take “The world is round” seriously because it matters for my actions. I do not see how “I’m morally in the wrong”* matters for my actions. (Nor how “I’m pan-galactically in the wrong” matters.)
*EDIT: in the sense that you seem to be using it (quite possibly because I don’t know what that sense even is!).
Yeah, I said it’s not an argument. Yet again I can only ask, “So what?”
So being wrong and not caring you are in the wrong is not the same as being right.
(And this doesn’t make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, “We both would be disgusted at the prospect of killing a dog for no reason.” But you seem to be saying there is something more.)
Yes. I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.
The wordings “affect my winning” and “matter” mean the same thing to me.
But they don’t mean the same thing. Morality matters more than anything else by definition. You don’t prove anything by adopting an idiosyncratic private language.
I take “The world is round” seriously because it matters for my actions. I do not see how “I’m morally in the wrong”* matters for my actions. (Nor how “I’m pan-galactically in the wrong” matters.)
The question is whether mattering for your actions is morally justifiable.
So being wrong and not caring you are in the wrong is not the same as being right.
Yet I still don’t care, and by your own admission I suffer not in the slightest from my lack of caring.
I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.
Zorg says that getting pangalacticism right matters more than anything. He cannot tell us why it matters, but boy it really does matter.
Morality matters more than anything else by definition.
Which would be? If you refer me to the dictionary again, I think we’re done here.
The fact that you are not going to worry about morality does not make morality a) false, b) meaningless, or c) subjective. Can I take it you are no longer arguing for any of claims a), b) or c)?
The difference is, believing “The world is round” affects whether I win or not, whereas believing “I’m morally in the wrong” does not.
You have not succeeded in showing that winning is the most important thing.
The fact that you are not going to worry about morality does not make morality a) false, b) meaningless, or c) subjective. Can I take it you are no longer arguing for any of claims a), b) or c)?
I’ve never argued (a), I’m still arguing (actually just informing you) that the words “objective morality” are meaningless to me, and I’m still arguing (c) but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I’d need a definition!)
You have not succeeded in showing that winning is the most important thing.
I’m using the word winning as a synonym for “getting what I want,” and I understand the most important thing to mean “what I care about most.” And I mean “want” and “care about” in a way that makes it tautological. Keep in mind I want other people to be happy, not suffer, etc. Nothing either of us has argued so far indicates we would necessarily have different moral sentiments about anything.
The fact that you are not going to worry about morality does not make morality a) false, b) meaningless, or c) subjective. Can I take it you are no longer arguing for any of claims a), b) or c)?
I’ve never argued (a), I’m still arguing (actually just informing you) that the words “objective morality” are meaningless to me
You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don’t-agree-with-it. I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.
and I’m still arguing (c) but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I’d need a definition!)
What is wrong with dictionary definitions?
You have not succeeded in showing that winning is the most important thing.
I’m using the word winning as a synonym for “getting what I want,” and I understand the most important thing to mean “what I care about most.”
That doesn’t affect anything. You still have no proof for the revised version.
And I mean “want” and “care about” in a way that makes it tautological. Keep in mind I want other people to be happy
Other people out there in the non-existent Objective World?
, not suffer, etc. Nothing either of us has argued so far indicates we would necessarily have different moral sentiments about anything.
I don’t think moral anti-realists are generally immoral people. I do think it is an intellectual mistake, whether or not you care about that.
You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don’t-agree-with-it.
Zorg said the same thing about his pan-galactic ethics.
I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.
Did you even read the post we’re commenting on?
That doesn’t affect anything. You still have no proof for the revised version.
Wait, you want proof that getting what I want is what I care about most?
Other people out there in the non-existent Objective World?
Read what I wrote again.
I don’t think moral anti-realists are generally immoral people.
(in this post, you distinguish stipulation and definition—do you have in mind a distinction I’m glossing over?)
I’m using ‘definition’ in the common sense: “the formal statement of the meaning or significance of a word, phrase, etc.” A stipulative definition is a kind of definition “in which a new or currently-existing term is given a specific meaning for the purposes of argument or discussion in a given context.”
A conceptual analysis of a term using necessary and sufficient conditions is another type of definition, in the common sense of ‘definition’ given above. Normally, a conceptual analysis seeks to arrive at a “formal statement of the meaning or significance of a word, phrase, etc.” in terms of necessary and sufficient conditions.
Jackson is observing, what Tom and Jack should be doing is saying that rightness is that thing (whatever exactly it is) which our folk concepts roughly converge on, and taking up the task of refining our understanding from there—no defining involved.
Using my dictionary usage of the term ‘define’, I would speak (in my language) of conceptual analysis as a particular way of defining a term, since the end result of a conceptual analysis is meant to be a “formal statement of the meaning or significance of a word, phrase, etc.”
In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis.
I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn’t want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).
And I do think my opening offers an accurate example of conceptual analysis. Albert and Barry’s arguments about the computer microphone and hypothetical aliens are meant to argue about their intuitive concepts of ‘sound’, and what set of necessary and sufficient conditions they might converge upon. That’s standard conceptual analysis method.
The reason this process looks silly to us (when using a non-standard example like ‘sound’) is that it is so unproductive. Why think Albert and Barry have the same concept in mind? Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other’s due to the specifics of unconscious associative learning and attribution substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we’ll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning? And, let’s say we arrive at a messy set of 6 necessary and sufficient conditions for the intuitive meaning of the term. Is that going to be as useful for communication as one we consciously chose because it carved-up thingspace well? I doubt it. The IAU’s definition of ‘planet’ is more useful than the messy ‘folk’ definition of ‘planet’. Folk intuitions about ‘planet’ evolved over thousands of years and different people have different intuitions which may not always converge. In 2006, the IAU used modern astronomical knowledge to carve up thingspace in a more useful and informed way than our intuitions do.
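(A minimal sketch of my own, not part of the original comment, of why a consciously chosen definition like the IAU’s is so much easier to apply than folk intuitions: the 2006 criteria can be written out as an explicit three-part test. The class and field names below are hypothetical.)

```python
# Illustrative only: the IAU's 2006 criteria for 'planet' as an explicit predicate.
from dataclasses import dataclass

@dataclass
class Body:
    orbits_the_sun: bool
    in_hydrostatic_equilibrium: bool  # massive enough to be nearly round
    has_cleared_its_orbit: bool       # has cleared the neighbourhood of its orbit

def is_planet(body: Body) -> bool:
    """A body counts as a planet iff it meets all three IAU criteria."""
    return (body.orbits_the_sun
            and body.in_hydrostatic_equilibrium
            and body.has_cleared_its_orbit)

# Pluto satisfies the first two criteria but not the third, so it fails the test.
pluto = Body(orbits_the_sun=True, in_hydrostatic_equilibrium=True,
             has_cleared_its_orbit=False)
print(is_planet(pluto))  # False
```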
Vague, intuitively-defined concepts are useful enough for daily conversation in many cases, and wherever they break down due to divergent intuitions and uses, we can just switch to stipulation/tabooing.
If you don’t have the patience to do philosophy, or you don’t think it’s of any value, by all means do something else -argue about facts and anticipations, whatever precisely that may involve. Just don’t think that in doing this latter thing you’ll address the question philosophy is interested in, or that you’ve said anything at all so far to show philosophy isn’t worth doing.
Yes. I’m going to argue about facts and anticipations. I’ve tried to show (a bit) in this post and in this comment why doing (certain kinds of) conceptual analysis isn’t worth it. I’m curious to hear your answers to my many-questions paragraph about the use of conceptual analysis, above.
I’ve skipped responding to many parts of your comment because I wanted to ‘get on the same page’ about a few things first. Please re-raise any issues you’d like a response on.
You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with your comments.
Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3): Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact -ie, X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, ‘knowledge’, ‘desires’, etc) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
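(A small sketch of my own, not part of the original exchange, showing why scenarios 1 and 2 are mechanically resolvable once the stipulated definition is fixed: applying the definition is just a check against the agreed bounds, so any remaining disagreement must be about the measured volume or about which bounds are standard. The example volumes are made up.)

```python
# Purely illustrative: the stipulated definition of 'subcompact' from the scenarios.
SUBCOMPACT_MIN_L = 2407
SUBCOMPACT_MAX_L = 2803

def is_subcompact(volume_litres: float) -> bool:
    """Apply the stipulated definition mechanically."""
    return SUBCOMPACT_MIN_L < volume_litres < SUBCOMPACT_MAX_L

# Scenario (1): the definition is agreed; measuring X settles the dispute.
print(is_subcompact(2650))  # True  -- on one measurement of X
print(is_subcompact(2900))  # False -- on the rival measurement

# Scenario (2): the volume is agreed; looking up the standard bounds settles
# which definition to plug in. Neither case leaves anything left to argue about.
```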
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I’d be interested to know if this seems wrong.
I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn’t want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).
You may think it’s obvious, but I don’t see you’ve shown any of these 3 examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned -could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other’s due to the specifics of unconscious associative learning and attribution substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we’ll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress—we really do come to a better understanding of things.
As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU’s definition for ‘planet’, I fail to see why clarifying our intuitive concepts is a good use of all that brain power. Such work might theoretically have some value for the psychology of concepts and for linguistics, and yet I suspect neither science would miss philosophy if philosophy went away. Indeed, scientific psychology is often said to have ‘debunked’ conceptual analysis because concepts are not processed in our brains in terms of necessary and sufficient conditions.
But I’m not sure I’m reading you correctly. Why do you think it’s useful to devote all that brainpower to clarifying our intuitive concepts of things?
As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU’s definition for ‘planet’, I fail to see why clarifying our intuitive concepts is a good use of all that brain power.
I think that where we differ is on ‘intuitive concepts’ -what I would want to call just ‘concepts’. I don’t see that stipulative definitions replace them. Scenario (3), and even the IAU’s definition, illustrate this. It is coherent for an astronomer to argue that the IAU’s definition is mistaken. This implies that she has a more basic concept -which she would strive to make explicit in arguing her case- different than the IAU’s. For her to succeed in making her case -which is imaginable- people would have to agree with her, in which case we would have at least partially to share her concept. The IAU’s definition tries to make explicit our shared concept -and to some extent legislates, admittedly- but it is a different sort of animal than what we typically use in making judgements.
Philosophy doesn’t impact non-philosophical activities often, but when it does the impact is often quite big. Some examples: the influence of Mach on Einstein, of Rousseau and others on the French and American revolutions, Mill on the emancipation of women and freedom of speech, Adam Smith’s influence on economic thinking.
I consider though that the clarification is an end in itself. This site proves -what’s obvious anyway- that philosophical questions naturally have a grip on thinking people. People usually suppose the answer to any given philosophical question to be self-evident, but equally we typically disagree about what the obvious answer is. Philosophy is about elucidating those disagreements.
Keeping people busy with activities which don’t turn the planet into more non-biodegradable consumer durables is fine by me. More productivity would not necessarily be a good thing (...to end with a sweeping undefended assertion).
Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU’s definition for ‘planet’, I fail to see why clarifying our intuitive concepts is a good use of all that brain power.
OTOH, there is a class of fallacies (the No True Scotsman argument, tendentious redefinition, etc), which are based on getting stipulative definitions wrong. Getting them right means formalisation of intuition or common usage or something like that.
To point people to some additional references on conceptual analysis in philosophy: Audi’s (1983, p. 90) “rough characterization” of conceptual analysis is, I think, standard: “Let us simply construe it as an attempt to provide an illuminating set of necessary and sufficient conditions for the (correct) application of a concept.”
Or, Ramsey’s (1992) take on conceptual analysis: “philosophers propose and reject definitions for a given abstract concept by thinking hard about intuitive instances of the concept and trying to determine what their essential properties might be.”
Sandin (2006) gives an example:
Enter Freddie, philosopher, who has set out to analyse the concept of knowledge. Freddie sits back in his armchair and thinks hard about knowledge and the ‘‘what-we-would-say-when’’ of the term knowledge. He tentatively proposes and either rejects or accepts necessary and sufficient conditions for (his) correct use of the term knowledge. After a while, he feels he has succeeded, writes down his analysis and publishes it. End of part 1. Part 2: Enter a second philosopher, Eddie. Eddie reads Freddie’s paper about knowledge. Eddie’s room is also furnished with an appropriate armchair, in which he sits back and tries to concoct a counterexample to Freddie’s proposed analysis. He feels he has succeeded, writes down his counterexample and publishes it. End of part 2.
This is precisely what Albert and Barry are doing with regard to ‘sound’.
Audi (1983). The Applications of Conceptual Analysis. Metaphilosophy, 14: 87-106.
Ramsey (1992). Prototypes and Conceptual Analysis. Topoi, 11: 59-70.
Sandin (2006). Has psychology debunked conceptual analysis? Metaphilosophy, 37: 26-33.
Eliezer does have a post in which he talks about doing what you call conceptual analysis more-or-less as you describe and why it’s worthwhile. Unfortunately, since that’s just one somewhat obscure post whereas he talks about tabooing words in many of his posts, when LWrongers encounter conceptual analysis, their cached thought is to say “taboo your words” and dismiss the whole analysis as useless.
The ‘taboo X’ reply does seem overused. Sometimes it is best just to ignore it when you don’t think it aids in conveying the point you were making.
When I try that, I tend to get down-votes and replies complaining that I’m not responding to their arguments.
I don’t know the specific details of the instances in question. One thing I am sure about, however, is that people can’t downvote comments that you don’t make. Sometimes a thread is just a lost cause. Once things get polarized it often makes no difference at all what you say. Which is not to say I am always wise enough to steer clear of arguments. Merely that I am wise enough to notice when I do make that mistake. ;)
In the example he does start with a word, namely ‘art’, then uses our intuition to get a set of examples. This is more-or-less how conceptual analysis works.
I disagree. Suppose after proposing a definition of art based on the listed examples, someone produced another example that clearly satisfied our intuitions of what constituted art but didn’t satisfy the definition. Would Eliezer:
a) say “sorry despite our intuitions that example isn’t art by definition”, or
b) conclude that the example was art and there was a problem with the definition?
He’s not trying to define art in accord with our collective intuitions, he’s trying to find the simplest boundary around a list of examples based on an individual’s intuitions.
I would argue that the list of examples in the article is abbreviated for simplicity. If there is no single clear simple boundary between the two sets, one can always ask for more examples. But one asks an individual and not all of humanity.
He’s not trying to define art in accord with our collective intuitions, he’s trying to find the simplest boundary around a list of examples based on an individual’s intuitions.
I would argue he’s trying to find the simplest coherent extrapolation of our intuitions.
Why do we even care about what specifically Eliezer Yudkowsky was trying to do in that post? Isn’t “is it more helpful to try to find the simplest boundary around a list or the simplest coherent extrapolation of intuitions?” a much better question?
Focus on what matters, work on actually solving problems instead of trying to just win arguments.
The answer to your question is “it depends on the situation”. There are some situations in which our intuitions contain some useful, hidden information which we can extract with this method. There are some situations in which our intuitions differ and it makes sense to consider a bunch of separate lists.
But, regardless, it is simply the case that when Eliezer says
“Perhaps you come to me with a long list of the things that you call ‘art’ and ‘not art’”
and
“It feels intuitive to me to draw this boundary, but I don’t know why—can you find me an intension that matches this extension? Can you give me a simple description of this boundary?”
he is not talking about “our intuitions”, but a single list provided by a single person.
(It is also the case that I would rather talk about that than whatever useless thing I would instead be doing with my time.)
Eliezer’s point in that post was that there are more and less natural ways to “carve reality at the joints.” That however much we might say that a definition is just a matter of preference, there are useful definitions and less useful ones. The conceptual analysis lukeprog is talking about does call for the rationalist taboo, in my opinion, but simply arguing about which definition is more useful as Eliezer does (if we limit conceptual analysis to that) does not.
Possibly the problem is that ‘sound’ has two meanings, and the disputants each are failing to see that the other means something different. Definitions are not relevant here, meanings are. (Gratuitous digression: what is “an auditory experience in a brain”? If this means something entirely characterizable in terms of neural events, end of story, then plausibly one of the disputants would say this does not capture what he means by ‘sound’ - what he means is subjective and ineffable, something neural events aren’t. He might go on to wonder whether that subjective, ineffable thing, given that it is apparently created by the supposedly mind-independent event of the falling of a tree, has any existence apart from his self (not to be confused with his brain!). I’m not defending this view, just saying that what’s offered is not a response but rather a simple begging of the question against it. End of digression.)
2) In your opening section you produce an example meant to show conceptual analysis is silly. Looks to me more like a silly attempt at an example of conceptual analysis. If you really want to make your case, why not take a real example of a philosophical argument -preferably one widely held in high regard at least by philosophers? There’s lots of ’em around.
3) In your section The trouble with conceptual analysis, you finally explain,
As explained above, philosophical discussion is not about “which definition to use” -it’s about (roughly, and among other things) clarifying our concepts. The task is difficult but worthwhile because the concepts in question are important but subtle.
If you don’t have the patience to do philosophy, or you don’t think it’s of any value, by all means do something else -argue about facts and anticipations, whatever precisely that may involve. Just don’t think that in doing this latter thing you’ll address the question philosophy is interested in, or that you’ve said anything at all so far to show philosophy isn’t worth doing. In this connection, one of the real benefits of doing philosophy is that it encourages precision and attention to detail in thinking. You say Eliezer Yudkowsky “...advises against reading mainstream philosophy because he thinks it will ‘teach very bad habits of thought that will lead people to be unable to do real work.’” The original quote continues, “...assume naturalism! Move on! NEXT!” Unfortunately Eliezer has a bad habit of making unclear and undefended or question-begging assertions, and this is one of them. What are the bad habits, and how does philosophy encourage them? And what precisely is meant by ‘naturalism’? To make the latter assertion and simultaneously to eschew the responsibility of articulating what this commits you to is to presume you can both have your cake and eat it too. This may work in blog posts -it wouldn’t pass in serious discussion.
(Unlike some on this blog, I have not slavishly pored through Eliezer’s every post. If there is somewhere a serious discussion of the meaning of ‘naturalism’ which shows how the usual problems with normative concepts like ‘rational’ can successfully be navigated, I will withdraw this remark).
You’re tacitly defining philosophy as an endeavor that “doesn’t involve facts or anticipations,” that is, as something not worth doing in the most literal sense. Such “philosophy” would be a field defined to be useless for guiding one’s actions. Anything that is useless for guiding my actions is, well, useless.
The question of what is worth doing is of course profoundly philosophical. You have just assumed an answer: that what is worth doing is achieving your aims efficiently and what is not worth doing is thinking about whether you have good aims, or which different aims you should have. (And anything that influences your goals will most certainly influence your expected experiences).
We’ve been over this: either “good aims” and “aims you should have” imply some kind of objective value judgment, which is incoherent, or they merely imply ways to achieve my final aims more efficiently, and we are back to my claim above as that is included under the umbrella of “guiding my actions.”
I think Peterdjones’s answer hits it on the head. I understand you’ve thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.
Really I meant to be throwing the ball back to lukeprog to give us an idea of what the ‘arguing about facts and anticipations’ alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the post about anticipations would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
As far as objective value, I simply don’t understand what anyone means by the term. And I think lukeprog’s point could be summed up as, “Trying to figure out how each discussant is defining their terms is not really ‘doing philosophy’; it’s just the groundwork necessary for people not to talk past each other.”
As far as making beliefs pay rent, a simpler way to put it is: If you say I should believe X but I can’t figure out what anticipations X entails, I will just respond, “So what?”
To unite the two themes: The ultimate definition would tell me why to care.
In the space of all possible meta-ethics, some meta-ethics are cooperative, and other meta-ethics are not so. This means that if you can choose which metaethics to spread to society, you stand a better chance at achieving your own goals if you spread cooperative metaethics. And cooperative metaethics is what we call “morality”, by and large.
It’s “Do unto others...”, but abstracted a bit, so that we really mean “Use the reasoning to determine what to do unto others, that you would rather they used when deciding how to do unto you.”
Omega puts you in a room with a big red button. “Press this button and you get ten dollars but another person will be poisoned to slowly die. If you don’t press it I punch you on the nose and you get no money. They have a similar button which they can use to kill you and get 10 dollars. You can’t communicate with them. In fact they think they’re the only person being given the option of a button, so this problem isn’t exactly like Prisoner’s dilemma. They don’t even know you exist or that their own life is at stake.”
“But here’s the offer I’m making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; of course, if you identify yourself in your decision theory, they’ll be identifying themselves.
“Careful though: This is a one time offer, and then I may put both of you to further different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive, regardless of specific circumstances.”
Given the above scenario, you’ll end up wanting people to choose protecting the life of strangers more than picking 10 dollars.
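To make the structure of the scenario concrete, here is a minimal sketch in Python. The utility numbers and the reduction of Omega’s offer to two candidate rules (“press” and “don’t press”) are my own illustrative assumptions, not anything specified in the scenario:

```python
# A toy model of Omega's offer: whatever rule I pick is imprinted on both players,
# so choosing a rule for myself is effectively choosing it for the other player too.
# The utility numbers below are made up purely for illustration.

UTILITY = {
    "cash": 10,       # I pressed my button and got the money
    "punch": -5,      # I refused and got punched on the nose
    "die": -1000,     # the other player pressed their button
    "survive": 0,     # the other player refused
}

def my_outcome(my_rule, their_rule):
    """My total utility, given the rule I follow and the rule they follow."""
    total = UTILITY["cash"] if my_rule == "press" else UTILITY["punch"]
    total += UTILITY["die"] if their_rule == "press" else UTILITY["survive"]
    return total

for rule in ("press", "dont_press"):
    print(rule, my_outcome(rule, rule))   # both of us follow the same rule
# press       -990   (I pocket $10 but they kill me)
# dont_press    -5   (I take the punch but we both live)
```

With any numbers on which dying is much worse than a punch on the nose, the rule you would want both of you to share is the one that refuses the button, which is the point being made above.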
I would indeed prefer it if other people had certain moral sentiments. I don’t think I ever suggested otherwise.
Not quite my point. I’m not talking about what your preferences would be. That would be subjective, personal. I’m talking about what everyone’s meta-ethical preferences would be, if self-consistent, and abstracted enough.
My argument is essentially that objective morality can be considered the position in meta-ethical-space which, if occupied by all agents, would lead to the maximization of utility.
That makes it objectively (because it refers to all the agents, not some of them, or one of them) different from other points in meta-ethical-space, and so it can be considered to lead to an objectively better morality.
Then why not just call it “universal morality”?
It’s called that too. Are you just objecting as to what we are calling it?
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you’re saying in what sense it’s supposed to be objective.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively. And who’s going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that’s a debate for another day.
I’m getting a bad vibe here, and no longer feel we’re having the same conversation.
“Person or group that decides”? Who said anything about anyone deciding anything? And my point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody “decides”, or everyone does. And if they don’t reach the same decision, then there’s no single objective morality—but even if so perhaps there’s a limited set of coherent metaethical positions, like two or three of them.
I think my post was inspired more by TDT solutions to Prisoner’s dilemma and Newcomb’s box, a decision theory that takes into account the copies/simulations of its own self, or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies.
I imagined systems that are not wholly copied, but share only the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, in the knowledge that other such systems would similarly modify themselves.
You’re right, I think I’m confused about what you were talking about, or I inferred too much. I’m not really following at this point either.
One thing, though, is that you’re using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God’s will, as a way to get along with others, etc. That’ll tend to cause some confusion. A good heuristic is, “Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it).”
I’m not.
An ethic may say:
I should support same-sex marriage. (SSM-YES)
or perhaps:
I should oppose same-sex marriage. (SSM-NO)
The reason for this position is the meta-ethic:
e.g.
Because I should act to increase average utility. (UTIL-AVERAGE)
Because I should act to increase total utility. (UTIL-TOTAL)
Because I should act to increase total amount of freedom (FREEDOM-GOOD)
Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
Because I should obey the will of our voters (DEMOCRACY-GOOD)
Because I should do what God commands. (OBEY-GOD).
But some metaethical positions are invalid because of false assumptions (e.g. God’s existence). Other positions may not be abstract enough that they could possibly become universal or apply to all situations. Some combinations of ethics and metaethics may be the result of other factual or reasoning mistakes (e.g. someone thinks SSM will harm society, but it ends up helping it, even by the person’s own measuring).
So, NO, I don’t speak necessarily about Collective Greatest Happiness Utilitarianism. I’m NOT talking about a specific metaethic, not even necessarily a consequentialistic metaethic (let alone a “Greatest happiness utilitarianism”) I’m speaking about the hypothetical point in metaethical space that everyone would hypothetically prefer everyone to have—an Attractor of metaethical positions.
That’s very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it’s also been argued that introspection (if that is what you mean by “subjectively”) is not a reliable guide to motivation.
This is the whole demonstrated preference thing. I don’t buy it myself, but that’s a debate for another time. What I mean by subjectively is that I will value one person’s life more than another person’s life, or I could think that I want that $1,000,000 more than a rich person wants it, but that’s just all in my head. To compare utility functions and work from demonstrated preference is usually—not always—a precursor to some kind of authoritarian scheme. I can’t say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from co-operation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf the Prudent Predator objection to Rand’s egoistic ethics).
OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise, and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics—which overlaps with but doesn’t equal altruism, just as it overlaps with but doesn’t equal selfishness.
The difference between morality and immorality is that morality can at its most abstract possible level be cooperative, and immorality can’t.
This by itself isn’t a reason that can force someone to care—you can’t make a rock care about anything, but that’s not a problem with your argument. But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don’t approach it, I expect more war and other devastation.
Although it usually doesn’t.
I think that your version of altruism is a straw man, and that what most people mean by altruism isn’t very different from co-operation.
Or, as I call it, universalisability.
That argument doesn’t have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences—it can be a self-fulfilling prophecy and not merely passive anticipation.
There is an argument from self interest, but it is tertiary to the two arguments I mentioned above.
Wrote a reply off-line and have been lapped several times (as usual). What Peterdjones says in his responses makes a lot of sense to me. I took a slightly different tack, which is maybe moot given your admission to being a solipsist:
-though the apparent tension in being a solipsist who argues gets to the root of the issue.
For what it may be worth:
I’m assuming you subscribe to what you consider to be a rigorously scientific world-view, and you consider such a world-view makes no place for objective values—you can’t fit them in, hence no way to understand them.
From a rigorously scientific point of view, a human being is just a very complex, homeostatic electro-chemical system. It rattles about the surface of the earth governed by the laws of nature just like any other physical system. A thing considered thus (ie from a scientific pt of view) is not ‘trying’ to do anything, has no beliefs, no preferences (just varying dispositions), no purposes, is neither rational nor irrational, and has no values. Natural science does not see right or wrong, punkt.
Some people think this is all there is, and that there is nothing useful to say about our conception of ourselves as beings with values (eg, Paul Churchland). I disagree. A person cannot make sense of her/himself with just this scientific understanding, important though it is, because s/he has to make decisions -has to figure out whether to vote left or right, be vegetarian or carnivore, to spend time writing blog responses or mow the lawn, etc. Values can’t be made sense of from a scientific point of view, but we recognize and need them, so we have to make sense of them otherwise.
Thought of from this point of view, all values are in some sense objective -ie, independent of you. There has to be a gap between value and actual behaviour, for the value to be made sense of as such (if everything you do is right, there is no right).
Presently you are disagreeing with me about values. To me this says you think there’s a right and wrong of the matter, which applies to us both. This is an example of an objective value. It would take some work to spell out a parallel moral example, if this is what you have in mind, but given the right context I submit you would argue with someone about some moral principle (hope so, anyway).
Prima facie, values are objective. Maybe on closer inspection it can be shown in some sense they aren’t, but I submit the idea is not incoherent. And showing otherwise would take doing some philosophy.
Solipsism is an ontological stance: in short, “there is nothing out there but my own mind.” I am saying something slightly different: “To speak of there being something/nothing out there is meaningless to me unless I can see why to care.” Then again, I’d say this is tautological/obvious in that “meaning” just is “why it matters to me.”
My “position” (really a meta-position about philosophical positions) is just that language obscures what is going on. It may take a while to make this clear, but if we continue I’m sure it will be.
I’m not a naturalist. I’m not skeptical of “objective” because of such reasons; I am skeptical of it merely because I don’t know what the word refers to (unless it means something like “in accordance with consensus”). In the end, I engage in intellectual discourse in order to win, be happier, get what I want, get pleasure, maximize my utility, or whatever you’ll call it (I mean them all synonymously).
If after engaging in such discourse I am not able to do that, I will eventually want to ask, “So what? What difference does it make to my anticipations? How does this help me get what I want and/or avoid what I don’t want?”
Do you cross the road with your eyes shut? If not, you are assuming, like everyone else, that there are things out there which are terminally disutilitous.
Whose language ? What language? If you think all language is a problem, what do you intend to replace it with?
It refers to the stuff that doesn’t go away when you stop believing in it.
Note the bold.
English, and all the rest that I know of.
Something better would be nice, but what of it? I am simply saying that language obscures what is going on. You may or may not find that insight useful.
If so, I suggest “permanent” as a clearer word choice.
I think that is rather drastic. Science may not accept beliefs and values as fundamental, but it can accept them as higher-level descriptions, cf Dennett’s Intentional Stance.
Again, I find it incredible that natural facts have no relation to morality. Morality would be very different if women laid eggs or men had balls of steel.
To say that moral values are both objective and disconnected from physical fact implies that they exist in their own domain, which is where some people, with some justice, tend to balk.
For some value of “incoherent”. Personally, I find it useful to strike out the word and replace it with something more precise, such as “semantically meaningless”, “contradictory”, “self-undermining” etc.
I take the position that while we may well have evolved with different values, they wouldn’t be morality. “Morality” is subjunctively objective. Nothing to do with natural facts, except insofar as they give us clues about what values we in fact did evolve with.
How do you know that the values we have evolved with are moral? (The claim that natural facts are relevant to moral reasoning is different to the claim that naturally-evolved behavioural instincts are ipso facto moral)
I’m not sure what you want to know. I feel motivated to be moral, and the things that motivate thinking machines are what I call “values”. Hence, our values are moral.
But of course naturally-evolved values are not moral simply by virtue of being values. Morality isn’t about values, it’s about life and death and happiness and sadness and many other things beside.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here. Since you can’t make sense of a person as rational if it’s not the case there’s anything she ought or ought not to do (and I admit you may think this needs defending), natural science lacks the means to ascribe rationality. Now, if we’re talking about the social sciences, that’s another matter. There is a discontinuity between these and the purely natural sciences. I read Dennett many years ago, and thought something like this divide is what his different stances are about, but I’d be open to hear a different view.
I didn’t say this—just that from a purely scientific point of view, morality is invisible. From an engaged, subjective point of view, where morality is visible, natural facts are relevant.
Here’s another stab at it: natural science can in principle tell us everything there is to know about a person’s inner workings and dispositions, right down to what sounds she is likely to utter in what circumstances. It might tell someone she will make the sounds, eg, ‘I ought to go to class’ in given circumstances. But no amount of knowledge of this kind will give her a reason to go to class (would you agree?). To get reasons -not to mention linguistic meaning and any intentional states- you need a subjective -ie, non-scientific- point of view. The two views are incommensurable, but neither is dispensable -people need reasons.
I acknowledge this is a subject of lively debate. Still, I stick to the proposition that you can’t derive an ought from an is, and that this is what’s at stake here.
But much of the material on LW is concerned with rational oughts: a rational agent ought to maximise its utility function (its arbitrary set of goals) as efficiently as possible. Rational agents should win, in short. That seems to be an analytical truth arrived at by unpacking “rational”. Generally speaking, where you have rules, you have coulds and shoulds and couldn’ts and shouldn’ts. I have been trying to press that unpacking morality leads to the similar analytical truth: “a moral agent ought to adopt universalisable goals.”
“Oughts” in general appear wherever you have rules, which are often abstractly defined so that they apply to physical systems as well as anything else.
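As a very rough illustration of that analytical truth, here is a minimal sketch in Python of “maximise your utility function” reduced to picking the action with the highest expected utility. The actions, outcomes and probabilities are invented for the example:

```python
# A toy expected-utility maximiser. Each action maps to a list of
# (probability, utility) pairs; the "rational" choice is the action
# with the highest expected utility. All numbers are invented.

actions = {
    "go_to_class": [(0.9, 5), (0.1, -1)],
    "stay_in_bed": [(1.0, 2)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)   # go_to_class (expected utility 4.4 vs 2.0)
```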
I think LWers would say there are facts about her utility function from which conclusions can be drawn about how she should maximise it (and how she would if she were rational).
I don’t see why. If a person or other system has goals and is acting to achieve those goals in an effective way, then their goals can be inferred from their actions.
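For what it’s worth, here is a minimal sketch of that inference in Python. The options, the observed choices, and the crude “count how often each option wins” rule are all invented for illustration; real preference inference is of course much harder than this:

```python
# Toy "revealed preference" inference: recover a preference ordering
# from observed pairwise choices. Data and method are purely illustrative.

from collections import defaultdict

observed_choices = [
    ("apple", "pear"),     # offered apple vs pear, took the apple
    ("apple", "banana"),
    ("pear", "banana"),
]

wins = defaultdict(int)
for chosen, rejected in observed_choices:
    wins[chosen] += 1
    wins[rejected] += 0    # ensure rejected options appear in the tally

ordering = sorted(wins, key=wins.get, reverse=True)
print(ordering)   # ['apple', 'pear', 'banana'] -- the inferred ordering
```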
I expressed myself badly. I agree entirely with this.
Again, I agree with this. The position I want to defend is just that if you confine yourself strictly to natural laws, as you should in doing natural science, rules and oughts will not get a grip.
And I want to persuade LWers
1) that facts about her utility functions aren’t naturalistic facts, as facts about her cholesterol level or about neural activity in different parts of her cortex, are,
and
2) that this is ok—these are still respectable facts, notwithstanding.
But having a goal is not a naturalistic property. Some people might say, eg, that an evolved, living system’s goal is to survive. If this is your thought, my challenge would be to show me what basic physical facts entail that conclusion.
And they are likely to riposte that facts about her UF are naturalistic just because they can be inferred from her behaviour. You seem to be in need of a narrow, stipulative definition of naturalistic.
You introduced the word “basic” there. It might be the case that goals disappear on a very fine-grained atomistic view of things (along with rules and structures and various other things). But that would mean that goals aren’t basic physical facts. Naturalism tends to be defined more epistemically than physicalism, so the inferrability of UFs (or goals or intentions) from coarse-grained physical behaviour is a good basis for supposing them to be natural by that usage.
But this is false, surely. I take it that a fact about X’s UF might be something such as ‘X prefers apples to pears’. First, notice that X may also prefer his/her philosophy TA to his/her chemistry TA. X has different designs on the TA than on the apple. So, properly stated, preferences are orderings of desires, the objects of which are states of affairs rather than simple things (X desires that X eat an apple more than that X eat a pear). Second, to impute desires such as these requires also imputing beliefs (you observe the apple gathering behaviour -naturalistically unproblematic- but you also need to impute to X the belief that the things gathered are apples. X might be picking the apples thinking they are pears). There’s any number of ways to attribute beliefs and desires in a manner consistent with the behaviour. No collection of merely naturalistic facts will constrain these. There have been lots of theories advanced which try, but the consensus, I think, is that there is no easy naturalistic solution.
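To illustrate the underdetermination point with a toy model (the agent rule and the fruit example are mine, purely for illustration): two quite different belief/desire attributions can predict exactly the same observed behaviour, so the behaviour alone cannot decide between them.

```python
# Two different belief/desire attributions that predict the same behaviour.
# The decision rule below is an invented stand-in for a real agent model.

def predicted_action(belief_about_fruit, desired_fruit):
    """The agent gathers the fruit iff it believes the fruit is what it wants."""
    return "gather" if belief_about_fruit == desired_fruit else "ignore"

attributions = [
    {"belief": "apple", "desire": "apple"},   # X thinks they're apples and wants apples
    {"belief": "pear",  "desire": "pear"},    # X thinks they're pears and wants pears
]

for a in attributions:
    print(a, "->", predicted_action(a["belief"], a["desire"]))
# Both attributions predict "gather"; the observed gathering behaviour
# cannot tell us which pair of belief and desire X actually has.
```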
Oh, that’s the philosopher’s definition of naturalistic. OTOH, you could just adopt the scientist’s version and scan their brain.
Well, alright, please tell me: what is a Utility Function, that it can be inferred from a brain scan? How’s this supposed to work, in broad terms?
What they generally mean is “not subjective”. You might object that non-subjective value is contradictory, but that is not the same as objecting that it is incomprehensible, since one has to understand the meanings of individual terms to see a contradiction.
As for anticipations: believing morality is objective entails that some of your beliefs may be wrong by objective standards, and believing it is subjective does not entail that. So the belief in moral objectivity could lead to a revision of your aims and goals, which will in turn lead to different experiences.
I’m not saying non-subjective value is contradictory, just that I don’t know what it could mean. To me “value” is a verb, and the noun form is just a nominalization of the verb, like the noun “taste” is a nominalization of the verb “taste.” Ayn Rand tried to say there was such a thing as objectively good taste, even of foods, music, etc. I didn’t understand what she meant either.
But before I would even want to revise my aims and goals, I’d have to anticipate something different than I do now. What does “some of your beliefs may be wrong by objective standards” make me anticipate that would motivate me to change my goals? (This is the same as the question in the other comment: What penalty do I suffer by having the “wrong” moral sentiments?)
I don’t see the force to that argument. “Believe” is a verb and “belief” is a nominalisation. But beliefs can be objectively right or wrong—if they belong to the appropriate subject area.
It is possible for aesthetics(and various other things) to be un-objectifiable whilst morality (and various other things) are objectifiable.
Why?
You should be motivated by a desire to get things right in general. The anticipation thing is just a part of that. It’s not an ultimate. But morality is an ultimate because there is no more important value than a moral value.
If there is no personal gain from morality, that doesn’t mean you shouldn’t be moral. You should be moral by the definition of “moral” and “should”. It’s an analytical truth. It is for selfishness to justify itself in the face of morality, not vice versa.
First of all, I should disclose that I don’t find ultimately any kind of objectivism coherent, including “objective reality.” It is useful to talk about objective reality and objectively right or wrong beliefs most of the time, but when you really drill down there are only beliefs that predict my experience more reliably or less reliably. In the end, nothing else matters to me (nor, I expect, anyone else—if they understand what I’m getting at here).
So you disagree with EY about making beliefs pay rent? Like, maybe some beliefs don’t pay rent but are still important? I just don’t see how that makes sense.
This seems circular.
What if I say, “So what?”
How do you know that?
If disagreeing means it is good to entertain useless beliefs, then no. If disagreeing means that instrumental utility is not the ultimate value, then yes.
You say that like that’s a bad thing. I said it was analytical and analytical truths would be expected to sound tautologous or circular.
So it’s still true. Not caring is not refutation.
Why do I think that is a useful phrasing? That would be a long post, but EY got the essential idea in Making Beliefs Pay Rent.
Well, what use is your belief in “objective value”?
Ultimately, that is to say at a deep level of analysis, I am non-cognitive to words like “true” and “refute.” I would substitute “useful” and “show people why it is not useful,” respectively.
I meant the second part: “but when you really drill down there are only beliefs that predict my experience more reliably or less reliably” How do you know that?
What objective value are your instrumental beliefs? You keep assuming useful-to-me is the ultimate value and it isn’t: Morality is, by definition.
Then I have a bridge to sell you.
And would it be true that it is non-useful? Since to assert P is to assert “P is true”, truth is a rather hard thing to eliminate. One would have to adopt the silence of Diogenes.
That’s what I was responding to.
Zorg: And what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.
You just eliminated it: If to assert P is to assert “P is true,” then to assert “P is true” is to assert P. We could go back and forth like this for hours.
But you still haven’t defined objective value.
Dictionary says, “Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.”
How can a value be objective? EDIT: Especially since a value is a personal feeling. If you are defining “value” differently, how?
It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.
I think moral values are ultimate because I can’t think of a valid argument of the form “I should do X because Y”. Please give an example of a pan-galactic value that can be substituted for Y.
Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to “no, that’s not true”.
By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.
You haven’t remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking.
“Values can be defined as broad preferences concerning appropriate courses of action or outcomes”
I missed this:
I’ll just decide not to follow the advice, or I’ll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don’t need to use the word “true” or any equivalent to do that. I can just say it didn’t work.
People have been known to follow really bad advice, sometimes to their detriment and suffering a lot of pain along the way.
Some people have followed excessively stringent diets to the point of malnutrition or death. (This isn’t intended as a swipe at CR—people have been known to go a lot farther than that.)
People have attempted (for years or decades) to shut down their sexual feelings because they think their God wants it.
Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent—showing that you have dispensed with the concept—is harder. Why didn’t it work? You’re going to have to paraphrase “Because it wasn’t true” or refuse to answer.
The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It’s impossible to show you’ve dispensed with any concept, except to show that it isn’t useful for what you’re doing. That is what I’ve done. I’m non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.
Truth is useful for whatever you want to do with it. If people can collect stamps for the sake of collecting stamps, they can collect truths for the sake of collecting truths.
Sounding like religion would not render something incomprehensible...but it could easily provoke an “I don’t like it” reaction, which is then dignified with the label “incoherent” or whatever.
I agree, if you mean things like, “If I now believe that she is really a he, I don’t want to take ‘her’ home anymore.”
Neither can I. I just don’t draw the same conclusion. There’s a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I’m not sure why you would think it is veiled disagreement, seeing as lukeprog’s whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of “incoherent to me” or someone else, so it’s not some kind of damning word. It simply means the message is not getting through to me. That could be your fault, my fault, or English’s fault, and I don’t really care which it is, but it would be preferable for something to actually make it across the inferential gap.)
EDIT: Oops, posted too soon.
So basically you are saying that preferences can change because of facts/beliefs, right? And I agree with that. To give a more mundane example, if I learn Safeway doesn’t carry egg nog and I want egg nog, I may no longer want to go to Safeway. If I learn that egg nog is bad for my health, I may no longer want egg nog. If I believe health doesn’t matter because the Singularity is near, I may want egg nog again. If I believe that egg nog is actually made of human brains, I may not want it anymore.
At bottom, I act to get enjoyment and/or avoid pain, that is, to win. What actions I believe will bring me enjoyment will indeed vary depending on my beliefs. But it is always ultimately that winning/happiness/enjoyment/fun/deliciousness/pleasure that I am after, and no change in belief can change that. I could take short-term pain for long-term gain, but that would be because I feel better doing that than not.
But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn’t make my want for deliciousness “uninfluenced by personal feelings.”
In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn’t lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus “objective values.” But not for 1, because personal feelings are, well, personal.
If so, then it seems I am back to my initial response to lukeprog and ensuing brief discussion. In short, if it is only the belief in objective facts that is wrong, then I wouldn’t want to call that morality, but more just self-help, or just what the whole rest of LW is. It is not that someone could be wrong about their preferences/values in sense 1, only in sense 2.
“Incoherence” means several things. Some of them, such as self-contradiction, are as objective as anything. You seem to find morality meaningless in some personal sense. Looking at dictionaries doesn’t seem to work for you. Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word “good” used a lot, unless they were raised by wolves. So that’s why I see complaints of incoherence as being disguised disagreement.
If you say so. That doesn’t make morality false, meaningless or subjective. It makes you an amoral hedonist.
Perhaps not completely, but that still leaves some things as relatively more objective than others.
Then your categories aren’t exhaustive, because preferences can also be defined to include universalisable values alongside personal whims. You may be making the classic error of taking “subjective” to mean “believed by a subject”.
The problem isn’t that I don’t know what it means. The problem is that it means many different things and I don’t know which of those you mean by it.
I have moral sentiments (empathy, sense of justice, indignation, etc.), so I’m not amoral. And I am not particularly high time-preference, so I’m not a hedonist.
If you mean preferences that everyone else shares, sure, but there’s no stipulation in my definitions that other people can’t share the preferences. In fact, I said, “(though they may be universal or semi-universal).”
It’d be a “classic error” to assume you meant one definition of subjective rather than another, when you haven’t supplied one yourself? This is about the eighth time in this discussion that I’ve thought that I can’t imagine what you think language even is.
I doubt we have any disagreement, to be honest. I think we just view language radically differently. (You could say we have a disagreement about language.)
What “moral” means, or what “good” means?
No, that isn’t the problem. It has one basic meaning, but there are a lot of different theories about it. Elsewhere you say that utilitarianism renders objective morality meaningful. A theory of X cannot render X meaningful, but it can render X plausible.
But you theorise that you only act on them (and that nobody ever acts except) to increase your pleasure.
I don’t see the point in stipulating that preferences can’t be shared. People who believe they can be just have to find another word. Nothing is proven.
I’ve quoted the dictionary definition, and that’s what I mean.
“existing in the mind; belonging to the thinking subject rather than to the object of thought (opposed to objective). 2. pertaining to or characteristic of an individual; personal; individual: a subjective evaluation. 3. placing excessive emphasis on one’s own moods, attitudes, opinions, etc.; unduly egocentric”
I think language is public, I think (genuine) disagreements about meaning can be resolved with dictionaries, and I think you shouldn’t assume someone is using idiosyncratic definitions unless they give you good reason.
Objective truth is what you should believe even if you don’t. Objective values are the values you should have even if you have different values.
Where the groundwork is about 90% of the job...
That has been answered several times. You are assuming that instrumental value is ultimate value, and it isn’t.
Imagine you are arguing with someone who doesn’t “get” rationality. If they believe in instrumental values, you can persuade them that they should care about rationality because it will enable them to achieve their aims. If they don’t, you can’t. Even good arguments will fail to work on some people.
You should care about morality because it is morality. Morality defines (the ultimate kind of) “should”.
“What I should do” =def “what is moral”.
Not everyone does get that, which is why “don’t care” is “made to care” by various sanctions.
“Should” for what purpose?
I certainly agree there. The question is whether it is more useful to assign the label “philosophy” to groundwork+theory or just the theory. A third possibility is that doing enough groundwork will make it clear to all discussants that there are no (or almost no) actual theories in what is now called “philosophy,” only groundwork, meaning we would all be in agreement and there would be nothing to argue except definitions.
I may not be able to convince them, but at least I would be trying to convince them on the grounds of helping them achieve their aims. It seems you’re saying that, in the present argument, you are not trying to help me achieve my aims (correct me if I’m wrong). This is what makes me curious about why you think I would care. The reasons I do participate, by the way, are that I hold out the chance that you have a reason why I would care (which maybe you are not articulating in a way that makes sense to me yet), that you or others will come to see my view that it’s all semantic confusion, and because I don’t want to sound dismissive or obstinate in continuing to say, “So what?”
Believing in truth is what rational people do.
Which is good because...?
Correct.
I can argue that your personal aims are not the ultimate value, and I can suppose you might care about that just because it is true. That is how arguments work: one rational agent tries to persuade another that something is true. If one of the participants doesn’t care about truth at all, the process probably isn’t going to work.
I think that horse has bolted. Inasmuch as you don’t care about truth per se, you have advertised yourself as being irrational.
Winning is what rational people do. We can go back and forth like this.
It benefits me, because I enjoy helping people. See, I can say, “So what?” in response to “You’re wrong.” Then you say, “You’re still wrong.” And I walk away feeling none the worse. Usually when someone claims I am wrong I take it seriously, but only because I know how it could possibly, potentially affect me negatively. In this case you are saying it is different, and I can safely walk away with no terror ever to befall me for “being wrong.”
Sure, people usually argue whether something is “true or false” because such status makes a difference (at least potentially) to their pain or pleasure, happiness, utility, etc. As this is almost always the case, it is understandably unusual for someone to say they don’t care about something being true or false. But in a situation where, ex hypothesi, the thing being discussed—very unusually—is claimed not to have any effect on such things, “true” and “false” become pointless labels. I only ever use such labels because they can help me enjoy life more. When they can’t, I will happily discard them.
So you say. I can think of two arguments against that: people acquire true beliefs that aren’t immediately useful, and untrue beliefs can be pleasing.
I never said they had to be “immediately useful” (hardly anything ever is). Untrue beliefs might be pleasing, but when people are arguing truth and falsehood it is not in order to prove that the beliefs they hold are untrue so that they can enjoy believing them, so it’s not an objection either.
You still don’t have a good argument to the effect that no one cares about truth per se.
A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I’m just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don’t/wouldn’t care about “truth” in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.
I think “usually” is enough qualification, especially considering that he says “makes a difference” and not “completely determines”.
Hmm. It sounds to me like a kind of methodological twist on logical positivism...just don’t bother with things that don’t have empirical consequences.
I think Peterdjones’s answer hits it on the head. I understand you’ve thrashed-out related issues elsewhere, but it seems to me your claim that the idea of an objective value judgment is incoherent would again require doing quite a bit of philosophy to justify.
Really I meant to be throwing the ball back to lukeprog to give us an idea of what the ‘arguing about facts and anticipations’ alternative is, if not just philosophy pretending not to be. I could have been more clear about this. Part of my complaint is the wanting to have it both ways. For example, the thinking in the [anticipations](http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/) post would presumably be taken not to be philosophy, but it sounds a whole lot to me like a quick and dirty advocacy of anti-realism. If LWers are serious about this idea, they really should look into its implications if they want to avoid inadvertent contradictions in their world-views. That means doing some philosophy.
You say that objective values are incoherent, but you offer no argument for it. Presenting philosophical claims without justification isn’t something different to philosophy, or something better. It isn’t good rationality either. Rationality is as rationality does.
By incoherent I simply mean “I don’t know how to interpret the words.” So far no one seems to want to help me do that, so I can only await a coherent definition of objective ethics and related terms. Then possibly an argument could start. (But this is all like deja vu from the recent metaethics threads.)
Can you interpret the words “morality is subjective”? How about the words “morality is not subjective”?
“Morality is subjective”: Each person has their own moral sentiments.
“Morality is not subjective”: Each person does not have their own moral sentiments. Or there is something more than each person’s moral sentiments that is worth calling “moral.” <--- But I ask, what is that “something more”?
OK. That is not what “subjective” means. What it means is that if something is subjective, an opinion is guaranteed to be correct or the last word on the matter just because it is the person’s opinion. And “objective” therefore means that it is possible for someone to be wrong in their opinion.
I don’t claim moral sentiments are correct, but simply that a person’s moral sentiment is their moral sentiment. They feel some emotions, and that’s all I know. You seem to be saying there is some way those emotions can be correct or incorrect, but in what sense? Or probably a clearer way to ask the question is, “What disadvantage can I anticipate if my emotions are incorrect?”
An emotion, such as a feeling of elation or disgust, is not correct or incorrect per se; but an emotion per se is no basis for a moral sentiment, because moral sentiment has to be about something. You could think gay marriage is wrong because homosexuality disgusts you, or you could feel serial-killing is good because it elates you, but that doesn’t mean the conclusions you are coming to are right. It may be a cast iron fact that you have those particular sentiments, but that says nothing about the correctness of their content, any more than any opinion you entertain is automatically correct.
ETA The disadvantages you can expect if your emotions are incorrect include being in the wrong whilst feeling you are in the right. Much as if you are entertaining incorrect opinions.
What if I don’t care about being wrong (if that’s really the only consequence I experience)? What if I just want to win?
Then you are, or are likely to be, morally in the wrong. That is of course possible. You can choose to do wrong. But it doesn’t constitute any kind of argument. Someone can elect to ignore the roundness of the world for some perverse reason, but that doesn’t make “the world is round” false or meaningless or subjective.
Indeed it is not an argument. Yet I can still say, “So what?” I am not going to worry about something that has no effect on my happiness. If there is some way it would have an effect, then I’d care about it.
The difference is, believing “The world is round” affects whether I win or not, whereas believing “I’m morally in the wrong” does not.
That is apparently true in your hypothetical, but it’s not true in the real world. Just as the roundness of the world has consequences, the wrongness of an action has consequences. For example, if you kill someone, then your fate is going to depend (probabilistically) on whether you were in the right (e.g. he attacked and you were defending your life) or in the wrong (e.g. you murdered him when he caught you burgling his house). The more in the right you were, then, ceteris paribus, the better your chances are.
You’re interpreting “I’m morally in the wrong” to mean something like, “Other people will react badly to my actions,” in which case I fully agree with you that it would affect my winning. Peterdjones apparently does not mean it that way, though.
Actually I am not. I am interpreting “I’m morally wrong” to mean something like, “I made an error of arithmetic in an area where other people depend on me.”
An error of arithmetic is an error of arithmetic regardless of whether any other people catch it, and regardless of whether any other people react badly to it. It is not, however, causally disconnected from their reaction, because, even though an error of arithmetic is what it is regardless of people’s reaction to it, nevertheless people will probably react badly to it if you’ve made it in an area where other people depend on you. For example, if you made an error of arithmetic in taking a test, it is probably the case that the test-grader did not make the same error of arithmetic and so it is probably the case that he will react badly to your error. Nevertheless, your error of arithmetic is an error and is not merely getting-a-different-answer-from-the-grader. Even in the improbable case where you luck out and the test grader makes exactly the same error as you and so you get full marks, nevertheless, you did still make that error.
Even if everyone except you wakes up tomorrow and believes that 3+4=6, whereas you still remember that 3+4=7, nevertheless in many contexts you had better not switch to what the majority believe. For example, if you are designing something that will stand up, like a building or a bridge, you had better get your math right, you had better correctly add 3+4=7 in the course of designing the edifice if that sum is ever called on calculating whether the structure will stand up.
If humanity divides into two factions, one faction of which believes that 3+4=6 and the other of which believes that 3+4=7, then the latter faction, the one that adds correctly, will in all likelihood over time prevail on account of being right. This is true even if the latter group starts out in the minority. Just imagine what sort of tricks you could pull on people who believe that 3+4=6. Because of the truth of 3+4=7, eventually people who are aware of this truth will succeed and those who believe that 3+4=6 will fail, and over time the vast majority of society will once again come to accept that 3+4=7.
And similarly with morality.
Nothing’s jumping out at me that would seriously impact a group’s effectiveness from day to day. I rarely find myself needing to add three and four in particular, and even more rarely in high-stakes situations. What did you have in mind?
Suppose you think that 3+4=6.
I offer you the following deal: give me $3 today and $4 tomorrow, and I will give you a 50 cent profit the day after tomorrow, by returning to you $6.50. You can take as much advantage of this as you want. In fact, if you like, you can give me $3 this second, $4 in one second, and in the following second I will give you back all your money plus 50 cents profit—that is, I will give you $6.50 in two seconds.
Since you think that 3+4=6, you will jump at this amazing deal.
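(A minimal sketch, not part of the original exchange, just to lay the arithmetic out: each round of the hypothetical deal costs the person who believes 3+4=6 fifty cents, even though by his own lights he is making a profit.)

```python
# Illustrative only: the "$3 now, $4 later, $6.50 back" deal as a money pump.
def run_pump(rounds: int) -> float:
    mark_net = 0.0            # net cash flow for the person who thinks 3+4=6
    for _ in range(rounds):
        mark_net -= 3.0       # hands over $3
        mark_net -= 4.0       # hands over $4 (he thinks he has now paid $6)
        mark_net += 6.50      # receives $6.50, which he takes to be a 50-cent gain
    return mark_net           # in fact he has paid $7.00 per round

print(run_pump(10))           # -5.0: down five dollars after ten rounds
```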
I find that most people who believe absurd things still have functioning filters for “something is fishy about this”. I talked to a person who believed that the world was going to end in 2012, and I offered to give them a dollar right then in exchange for a hundred after the world didn’t end, but of course they didn’t take it: something was fishy about that.
Also, dollars are divisible: someone who believes that 3+4=6 may not believe that 300+400=600.
If he isn’t willing to take your trade, then his alleged belief that the world will end in 2012 is weak at best. In contrast, if you offer to give me $6.50 in exchange for $3 plus $3, then I will take your offer, because I really do believe that 3+3=6.
On the matter of divisibility, you are essentially proposing that someone with faulty arithmetic can effectively repair the gap by translating arithmetic problems away from the gap (e.g. by realizing that 3 dollars is 300 pennies and doing arithmetic on the pennies). But in order for them to do this consistently they need to know where the gap is, and if they know that, then it’s not a genuine gap. If they realize that their belief that 3+4=6 is faulty, then they don’t really believe it. In contrast, if they don’t realize that their belief that 3+4=6 is faulty, then they won’t consistently translate arithmetic problems away from the gap, and so my task becomes a simple matter of finding areas where they don’t translate problems away from the gap, but instead fall in.
Are you saying that you would not be even a little suspicious and inclined to back off if someone said they’d give you $6.50 in exchange for $3+$3? Not because your belief in arithmetic is shaky, but because your trust that people will give you fifty cents for no obvious reason is nonexistent and there is probably something going on?
I’m not denying that in a thought experiment, agents that are wrong about arithmetic can be money-pumped. I’m skeptical that in reality, human beings that are wrong about arithmetic can be money-pumped on an interesting scale.
In my hypothetical, we can suppose that they are perfectly aware of the existence of the other group. That is, the people who think that 3+4=7 are aware of the people who think that 3+4=6, and vice versa. This will provide them with all the explanation they need for the offer. They will think, “this person is one of those people who think that 3+4=7”, and that will explain to them the deal. They will see that the others are trying to profit off them, but they will believe that the attempt will fail, because after all, 3+4=6.
As a matter of fact, in my hypothetical the people who believe that 3+4=6 would be just as likely to offer those who believe that 3+4=7 a deal in an attempt to money-pump them. Since they believe that 3+4=6, and are aware of the belief of the others, they might offer the others the following deal: “give us $6.50, and then the next day we will give you $3 and the day after $4.” Since they believe that 3+4=6, they will think they are ripping the others off.
The thought experiment wasn’t intended to be applied to humans as they really are. It was intended to explain humans as they really are by imagining a competition between two kinds of humans—a group that is like us, and a group that is not like us. In the hypothetical scenario, the group like us wins.
And I think you completely missed my point, by the way. My point was that arithmetic is not merely a matter of agreement. The truth of a sum is not merely a matter of the majority of humanity agreeing on it. If more than half of humans believed that 3+4=6, this would not make 3+4=6 be true. Arithmetic truth is independent of majority opinion (call the view that arithmetic truth is a matter of consensus within a human group “arithmetic relativism” or “the consensus theory of arithmetic truth”). I argued for this as follows: suppose that half of humanity—nay, more than half—believed that 3+4=6, and a minority believed that 3+4=7. I argued that the minority with the latter belief would have the advantage. But if consensus defined arithmetic truth, that should not be the case. Therefore consensus does not define arithmetic truth.
My point is this: that arithmetic relativism is false. In your response, you actually assumed this point, because you’ve been assuming all along that 3+4=6 is false, even though in my hypothetical scenario a majority of humanity believed it is true.
So you’ve actually assumed my conclusion but questioned the argument that I used to argue for the conclusion.
And this, in turn, was to illustrate a more general point about consensus theories and relativism. The context was a discussion of morality. I had been interpreted as advocating what amounts to a consensus theory of morality, and I was trying to explain why my specific claims do not entail a consensus theory of morality, but are also compatible with a theory of morality as independent of consensus.
I agree with this, if that makes any difference.
In sum, you seem to be saying that morality involves arithmetic, and being wrong about arithmetic can hurt me, so being wrong about morality can hurt me.
There’s no particular connection between morality and arithmetic that I’m aware of. I brought up arithmetic to illustrate a point. My hope was that arithmetic is less problematic, less apt to lead us down philosophical blind allies, so that by using it to illustrate a point I wasn’t opening up yet another can of worms.
Then you basically seem to be saying I should signal a certain morality if I want to get on well in society. Well I do agree.
Whether someone is judged right and wrong by others has consequences, but the people doing the judging might be wrong. It is still an error to make morality justify itself in terms of instrumental utility, since there are plenty of examples of things that are instrumentally right but ethically wrong, like improved gas chambers.
Actually being in the right increases your probability of being judged to be in the right. Yes, the people doing the judging may be wrong, and that is why I made the statement probabilistic. This can be made blindingly obvious with an example. Go to a random country and start gunning down random people in the street. The people there will, with probability so close to 1 as makes no real difference, judge you to be in the wrong, because you of course will be in the wrong.
There is a reason why people’s judgment is not far off from right. It’s the same reason that people’s ability to do basic arithmetic when it comes to money is not far off from right. Someone who fails to understand that $10 is twice $5 (or rather the equivalent in the local currency) is going to be robbed blind and his chances of reproduction are slim to none. Similarly, someone whose judgment of right and wrong is seriously defective is in serious trouble. If someone witnesses a criminal lunatic gun down random people in the street and then walks up to him and says, “nice day”, he’s a serious candidate for a Darwin Award. Correct recognition of evil is a basic life skill, and any human who does not have it will be cut out of the gene pool. And so, if you go to a random country and start killing people randomly, you will be neutralized by the locals quickly. That’s a prediction. Moral thought has predictive power.
The only reason anyone can get away with the mass murder that you allude to is that they have overwhelming power on their side. And even they did it in secret, as I recall learning, which suggests that powerful as they were, they were not so powerful that they felt safe murdering millions openly.
Morality is how a human society governs itself in which no single person or organized group has overwhelming power over the rest of society. It is the spontaneous self-regulation of humanity. Its scope is therefore delimited by the absence of a person or organization with overwhelming power. Even though just about every place on Earth has a state, since it is not a totalitarian state there are many areas of life in which the state does not interfere, and which are therefore effectively free of state influence. In these areas of life humanity spontaneously self-regulates, and the name of the system of spontaneous self-regulation is morality.
It sounds to me like you’re describing the ability to recognize danger, not evil, there.
Say that your hypothetical criminal lunatic manages to avoid the police, and goes about his life. Later that week, he’s at a buffet restaurant, acting normally. Is he still evil? Assuming nobody recognizes him from the shooting, do you expect the other people using the buffet to react unusually to him in any way?
It’s not either/or. There is no such thing as a bare sense of danger. For example, if you are about to drive your car off a cliff, hopefully you notice in time and stop. In that case, you’ve sensed danger—but you also sensed the edge of a cliff, probably with your eyes. Or if you are about to drink antifreeze, hopefully you notice in time and stop. In that case, you’ve sensed danger—but you’ve also sensed antifreeze, probably with your nose.
And so on. It’s not either/or. You don’t either sense danger or sense some specific thing which happens to be dangerous. Rather, you sense something that happens to be dangerous, and because you know it’s dangerous, you sense danger.
Chances are higher than average that if he was a criminal lunatic a few days ago, he is still a criminal lunatic today.
Obviously not, because if you assume that people fail to perceive something, then it follows that they will behave in a way that is consistent with their failure to perceive it. Similarly, if you fail to notice that the antifreeze that you’re drinking is anything other than fruit punch, then you can be expected to drink it just as if it were fruit punch.
My point was that in the shooting case, the perception of danger is sufficient to explain bystanders’ behavior. They may perceive other things, but that seems mostly irrelevant.
You said:
This claim appears to be incompatible with your expectation that people will not notice your hypothetical murderer when they encounter him acting according to social norms after committing a murder, given that he’s supposedly still evil.
People perceive danger because they perceive evil, and evil is dangerous.
It is not irrelevant that they perceive a specific thing (such as evil) which is dangerous. Take away the perception of the specific thing, and they have no basis upon which to perceive danger. Only Spiderman directly perceives danger, without perceiving some specific thing which is dangerous. And he’s fictional.
I was referring to the standard, common ability to recognize evil. I was saying that someone who does not have that ability will be cut out of the gene pool (not definitely—probabilistically, his chances of surviving and reproducing are reduced, and over the generations the effect of this disadvantage compounds).
People who fail to recognize that the guy is that same guy from before are not thereby missing the standard human ability to recognize evil.
Except when the evil guys take over. Then you are in trouble if you oppose them.
That doesn’t affect my point. If there are actual or conceptual circumstances where instrumental good diverges from moral good, the two cannot be equated.
Why would it be wrong if they do? Your theory of morality seems to be in need of another theory of morality to justify it.
Which is why the effective scope of morality is limited by concentrated power, as I said.
I did not equate moral good with instrumental good in the first place.
I didn’t say it would be wrong. I was talking about making predictions. The usefulness of morality in helping you to predict outcomes is limited by concentrated power.
On the contrary, my theory of morality is confirmed by the evidence. You yourself supplied some of the evidence. You pointed out that a concentration of power creates an exception to the prediction that someone who guns down random people will be neutralized. But this exception fits with my theory of morality, since my theory of morality is that it is the spontaneous self-regulation of humanity. Concentrated power interferes with self-regulation.
You say:
...but you also say...
...which seems to imply that you are still thinking of morality as something that has to pay its way instrumentally, by making useful predictions.
It’s a conceptual truth that power interferes with spontaneous self-regulation: but that isn’t the point. The point is not that you have a theory that makes predictions, but whether it is a theory of morality.
It is dubious to say of any society that the way it is organised is ipso facto moral. You have forestalled the relativistic problem by saying that societies must self-organise for equality and justice, not any old way, which takes it as read that equality and justice are Good Things. But an ethical theory must explain why they are good, not rest on them as a given.
“Has to”? I don’t remember saying “has to”. I remember saying “does”, or words to that effect. I was disputing the following claim:
This is factually false, considered as a claim about the real world.
I am presenting the hypothesis that, under certain constraints, there is no way for humanity to organize itself but morally or close to morally and that it does organize itself morally or close to morally. The most important constraint is that the organization is spontaneous, that is to say, that it does not rely on a central power forcing everyone to follow the same rules invented by that same central power. Another constraint is absence of war, though I think this constraint is already implicit in the idea of “spontaneous order” that I am making use of, since war destroys order and prevents order.
Because humans organize themselves morally, it is possible to make predictions. However, because of the “no central power” constraint, the scope of those predictions is limited to areas outside the control of the central power.
Fortunately for those of us who seek to make predictions on the basis of morality, and also fortunately for people in general, even though the planet is covered with centralized states, much of life still remains largely outside of their control.
Is that a stipulative definition (“morality” =def “spontaneous organisation”), or is there some independent standard of morality on which it is based?
What about non-centralised power? What if one fairly large group—the gentry, men, citizens, some racial group—has power over another in a decentralised way?
And what counts as a society? Can an Athenian slave-owner state that all citizens in their society are equal, and, as for slaves, that they are not members of their society?
ETA: Actually, it’s worse than that. Not only are there examples of non-centralised power, there are cases where centralised power is on the side of the angels and spontaneous self-organisation on the other side; for instance the Civil Rights struggle, where the federal government backed equality and the opposition was from the grassroots.
The Civil Rights struggle was national government versus state government, not government versus people. The Jim Crow laws were laws created by state legislatures, not spontaneous laws created by the people.
There is, by the way, such a thing as spontaneous law created by the people even under the state. The book Order Without Law is about this. The “order” it refers to is the spontaneous law—that is, the spontaneous self-government of the people acting privately, without help from the state. This spontaneous self-government ignores and in some cases contradicts the state’s official, legislated law.
Jim Crow was an example of official state law, and not an example of spontaneous order.
Plenty of things that happened weren’t sanctioned by state legislatures, such as discrimination by private lawyers, hassling of voters during registration drives, and the assassination of MLK.
But law isn’t morality. There is such a thing as laws that apply only to certain people, and which support privilege and the status quo rather than equality and justice.
Legislation distorts society and the distortion ripples outward. As for the assassination, that was a single act. Order is a statistical regularity.
I didn’t say it was. I pointed out an example of spontaneous order. It is my thesis that spontaneous order tends to be moral. Much order is spontaneous, so much order is moral, so you can make predictions on the basis of what is moral. That should not be confused with a claim that all order is morality, that all law is morality, which is the claim that you are disputing and a claim I did not make.
From its primordial state of equality...? I can see how a society that starts equal might self-organise to stay that way. But I don’t think they start equal that often.
The fact that you are amoral does not mean there is anything wrong with morality, and is not an argument against it. You might as well be saying “there is a perfectly good rational argument that the world is round, but I prefer to be irrational”.
That doesn’t constitute an argument unless you can explain why your winning is the only thing that should matter.
Yeah, I said it’s not an argument. Yet again I can only ask, “So what?” (And this doesn’t make me amoral in the sense of not having moral sentiments. If you tell me it is wrong to kill a dog for no reason, I will agree because I will interpret that as, “We both would be disgusted at the prospect of killing a dog for no reason.” But you seem to be saying there is something more.)
The wordings “affect my winning” and “matter” mean the same thing to me. I take “The world is round” seriously because it matters for my actions. I do not see how “I’m morally in the wrong”* matters for my actions. (Nor how “I’m pan-galactically in the wrong” matters.)
*EDIT: in the sense that you seem to be using it (quite possibly because I don’t know what that sense even is!).
So being wrong and not caring you are in the wrong is not the same as being right.
Yes. I am saying that moral sentiments can be wrong, and that that can be realised through reason, and that getting morality right matters more than anything.
But they don’t mean the same thing. Morality matters more than anything else by definition. You don’t prove anything by adopting an idiosyncratic private language.
The question is whether mattering for your actions is morally justifiable.
Yet I still don’t care, and by your own admission I suffer not in the slightest from my lack of caring.
Zorg says that getting pangalacticism right matters more than anything. He cannot tell us why it matters, but boy it really does matter.
Which would be? If you refer me to the dictionary again, I think we’re done here.
The fact that you are not going to worry about morality, does not make morality a) false b) meaningless or c) subjective. Can I take it you are no longer arguing for any of claims a) b) or c) ?
You have not succeeded in showing that winning is the most important thing.
I’ve never argued (a), I’m still arguing (actually just informing you) that the words “objective morality” are meaningless to me, and I’m still arguing (c) but only in the sense that it is equivalent to (b): in other words, I can only await some argument that morality is objective. (But first I’d need a definition!)
I’m using the word winning as a synonym for “getting what I want,” and I understand the most important thing to mean “what I care about most.” And I mean “want” and “care about” in a way that makes it tautological. Keep in mind I want other people to be happy, not suffer, etc. Nothing either of us have argued so far indicates we would necessarily have different moral sentiments about anything.
You are not actually being all that informative, since there remains a distinct suspicion that when you say some X is meaningless-to-you, that is a proxy for I-don’t-agree-with-it. I notice throughout these discussions that you never reference accepted dictionary definitions as a basis for meaningfulness, but instead always offer some kind of idiosyncratic personal testimony.
What is wrong with dictionary definitions?
That doesn’t affect anything. You still have no proof for the revised version.
Other people out there in the non-existent Objective World?
I don’t think moral anti-realists are generally immoral people. I do think it is an intellectual mistake, whether or not you care about that.
Zorg said the same thing about his pan-galactic ethics.
Did you even read the post we’re commenting on?
Wait, you want proof that getting what I want is what I care about most?
Read what I wrote again.
Read.
“Changing your aims” is an action, presumably available for guiding with philosophy.
Upvoted for thoughtfulness and thoroughness.
I’m using ‘definition’ in the common sense: “the formal statement of the meaning or significance of a word, phrase, etc.” A stipulative definition is a kind of definition “in which a new or currently-existing term is given a specific meaning for the purposes of argument or discussion in a given context.”
A conceptual analysis of a term using necessary and sufficient conditions is another type of definition, in the common sense of ‘definition’ given above. Normally, a conceptual analysis seeks to arrive at a “formal statement of the meaning or significance of a word, phrase, etc.” in terms of necessary and sufficient conditions.
Using my dictionary usage of the term ‘define’, I would speak (in my language) of conceptual analysis as a particular way of defining a term, since the end result of a conceptual analysis is meant to be a “formal statement of the meaning or significance of a word, phrase, etc.”
I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn’t want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).
And I do think my opening offers an accurate example of conceptual analysis. Albert and Barry’s arguments about the computer microphone and hypothetical aliens are meant to argue about their intuitive concepts of ‘sound’, and what set of necessary and sufficient conditions they might converge upon. That’s standard conceptual analysis method.
The reason this process looks silly to us (when using a non-standard example like ‘sound’) is that it is so unproductive. Why think Albert and Barry have the same concept in mind? Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other’s due to the specifics of unconscious associative learning and attribute substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we’ll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning? And, let’s say we arrive at a messy set of 6 necessary and sufficient conditions for the intuitive meaning of the term. Is that going to be as useful for communication as one we consciously chose because it carved up thingspace well? I doubt it. The IAU’s definition of ‘planet’ is more useful than the messy ‘folk’ definition of ‘planet’. Folk intuitions about ‘planet’ evolved over thousands of years and different people have different intuitions which may not always converge. In 2006, the IAU used modern astronomical knowledge to carve up thingspace in a more useful and informed way than our intuitions do.
Vague, intuitively-defined concepts are useful enough for daily conversation in many cases, and wherever they break down due to divergent intuitions and uses, we can just switch to stipulation/tabooing.
Yes. I’m going to argue about facts and anticipations. I’ve tried to show (a bit) in this post and in this comment why doing (certain kinds of) conceptual analysis isn’t worth it. I’m curious to hear your answers to my many-questions paragraph about the use of conceptual analysis, above.
I’ve skipped responding to many parts of your comment because I wanted to ‘get on the same page’ about a few things first. Please re-raise any issues you’d like a response on.
You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with your comments.
Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact -ie, X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, knowledge’, ‘desires’, etc) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
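(To make the contrast concrete, here is a minimal sketch of my own, reusing the made-up volume bounds from the example: scenarios 1 and 2 bottom out in a measurement or a look-up, whereas scenario 3 is a dispute about where the bounds themselves should be drawn.)

```python
# Illustrative only; the bounds come from the hypothetical 'subcompact' definition above.
SUBCOMPACT_MIN_L = 2407   # received definition's lower bound
SUBCOMPACT_MAX_L = 2803   # received definition's upper bound

def is_subcompact(volume_litres: float,
                  lower: float = SUBCOMPACT_MIN_L,
                  upper: float = SUBCOMPACT_MAX_L) -> bool:
    """Apply a stipulated definition of 'subcompact' to a measured volume."""
    return lower < volume_litres < upper

# Scenario 1: shared definition, disputed fact -- settled by measuring X.
print(is_subcompact(2600))                            # True, if X really is 2600 L

# Scenario 2: shared fact, disputed definition -- settled by looking the bounds up.
print(is_subcompact(2600, lower=2300, upper=2500))    # False under mistaken bounds

# Scenario 3 cannot be settled this way: Barry accepts the measurement and the
# received bounds, but argues the bounds themselves carve up cars badly.
```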
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I’d be interested to know if this seems wrong.
You may think it’s obvious, but I don’t see you’ve shown any of these 3 examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned -could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress—we really do come to a better understanding of things.
As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU’s definition for ‘planet’, I fail to see why clarifying our intuitive concepts is a good use of all that brain power. Such work might theoretically have some value for the psychology of concepts and for linguistics, and yet I suspect neither science would miss philosophy if philosophy went away. Indeed, scientific psychology is often said to have ‘debunked’ conceptual analysis because concepts are not processed in our brains in terms of necessary and sufficient conditions.
But I’m not sure I’m reading you correctly. Why do you think it’s useful to devote all that brainpower to clarifying our intuitive concepts of things?
I think that where we differ is on ‘intuitive concepts’ -what I would want to call just ‘concepts’. I don’t see that stipulative definitions replace them. Scenario (3), and even the IAU’s definition, illustrate this. It is coherent for an astronomer to argue that the IAU’s definition is mistaken. This implies that she has a more basic concept -which she would strive to make explicit in arguing her case- different than the IAU’s. For her to succeed in making her case -which is imaginable- people would have to agree with her, in which case we would have at least partially to share her concept. The IAU’s definition tries to make explicit our shared concept -and to some extent legislates, admittedly- but it is a different sort of animal than what we typically use in making judgements.
Philosophy doesn’t impact non-philosophical activities often, but when it does the impact is often quite big. Some examples: the influence of Mach on Einstein, of Rousseau and others on the French and American revolutions, Mill on the emancipation of women and freedom of speech, Adam Smith’s influence on economic thinking.
I consider though that the clarification is an end in itself. This site proves -what’s obvious anyway- that philosophical questions naturally have a grip on thinking people. People usually suppose the answer to any given philosophical question to be self-evident, but equally we typically disagree about what the obvious answer is. Philosophy is about elucidating those disagreements.
Keeping people busy with activities which don’t turn the planet into more non-biodegradable consumer durables is fine by me. More productivity would not necessarily be a good thing (...to end with a sweeping undefended assertion).
OTOH, there is a class of fallacies (the No True Scotsman argument, tendentious redefinition, etc.), which are based on getting stipulative definitions wrong. Getting them right means formalisation of intuition or common usage or something like that.
You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate. Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3) Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact -ie, X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look-up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, knowledge’, ‘desires’, etc) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I would be interested to hear if this seems wrong.
You may think it’s obvious, but I don’t see you’ve shown any of these 3 examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, eg, make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned - could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language - whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress—we really do come to a better understanding of things.
To point people to some additional references on conceptual analysis in philosophy: Audi’s (1983, p. 90) “rough characterization” of conceptual analysis is, I think, standard: “Let us simply construe it as an attempt to provide an illuminating set of necessary and sufficient conditions for the (correct) application of a concept.”
Or, Ramsey’s (1992) take on conceptual analysis: “philosophers propose and reject definitions for a given abstract concept by thinking hard about intuitive instances of the concept and trying to determine what their essential properties might be.”
Sandin (2006) gives an example:
This is precisely what Albert and Barry are doing with regard to ‘sound’.
Audi (1983). The Applications of Conceptual Analysis. Metaphilosophy, 14: 87-106.
Ramsey (1992). Prototypes and Conceptual Analysis. Topoi, 11: 59-70.
Sandin (2006). Has psychology debunked conceptual analysis? Metaphilosophy, 37: 26-33.
Eliezer does have a post in which he talks about doing what you call conceptual analysis more-or-less as you describe and why it’s worthwhile. Unfortunately, since that’s just one somewhat obscure post whereas he talks about tabooing words in many of his posts, when LWrongers encounter conceptual analysis, their cached thought is to say “taboo your words” and dismiss the whole analysis as useless.
The ‘taboo X’ reply does seem overused. It’s sometimes best just to ignore it when you don’t think it helps convey the point you were making.
When I try that, I tend to get down-votes and replies complaining that I’m not responding to their arguments.
I don’t know the specific details of the instances in question. One thing I am sure about, however, is that people can’t downvote comments that you don’t make. Sometimes a thread is just a lost cause. Once things get polarized it often makes no difference at all what you say. Which is not to say I am always wise enough to steer clear of arguments. Merely that I am wise enough to notice when I do make that mistake. ;)
I do not think that he is describing conceptual analysis. Starting with a word vs. starting with a set of objects makes all the difference.
In the example he does start with a word, namely ‘art’, then uses our intuition to get a set of examples. This is more-or-less how conceptual analysis works.
But he’s not analyzing “art”, he’s analyzing the set of examples, and that is all the difference.
I disagree. Suppose that after proposing a definition of art based on the listed examples, someone produced another example that clearly satisfied our intuitions of what constitutes art but didn’t satisfy the definition. Would Eliezer:
a) say “sorry despite our intuitions that example isn’t art by definition”, or
b) conclude that the example was art and there was a problem with the definition?
I’m guessing (b).
He’s not trying to define art in accord with our collective intuitions; he’s trying to find the simplest boundary around a list of examples based on an individual’s intuitions.
I would argue that the list of examples in the article is abbreviated for simplicity. If there is no single clear simple boundary between the two sets, one can always ask for more examples. But one asks an individual and not all of humanity.
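For what it’s worth, here is one toy way to picture “the simplest boundary around a list of examples” (a sketch of my own, not anything from the post; the function name and feature values are invented): represent each listed example by a single numeric feature and look for one threshold that separates the person’s “art” list from their “not art” list. If no single threshold works, that is exactly the cue to ask for more examples or a richer description.

```python
# Toy "simplest boundary" search: one invented numeric feature per example,
# labels supplied by a single person. We look for a single threshold that
# puts all positive examples on one side and all negatives on the other.

def simplest_threshold(positives, negatives):
    """Return a separating threshold if one exists, else None."""
    if max(negatives) < min(positives):
        return (max(negatives) + min(positives)) / 2
    if max(positives) < min(negatives):
        return (max(positives) + min(negatives)) / 2
    return None  # no single cut works: ask for more examples or features

# Feature values are invented purely for illustration.
art     = [0.81, 0.64, 0.92, 0.77]   # examples the person calls "art"
not_art = [0.12, 0.35, 0.28]         # examples the person calls "not art"

print(simplest_threshold(art, not_art))   # 0.495 -- one clean cut exists
```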
I would argue he’s trying to find the simplest coherent extrapolation of our intuitions.
Why do we even care about what specifically Eliezer Yudkowsky was trying to do in that post? Isn’t “is it more helpful to try to find the simplest boundary around a list or the simplest coherent extrapolation of intuitions?” a much better question?
Focus on what matters, work on actually solving problems instead of trying to just win arguments.
The answer to your question is “it depends on the situation”. There are some situations in which our intuitions contain some useful, hidden information which we can extract with this method. There are some situations in which our intuitions differ and it makes sense to consider a bunch of separate lists.
But, regardless, it is simply the case that when Eliezer says
“Perhaps you come to me with a long list of the things that you call ‘art’ and ‘not art’”
and
“It feels intuitive to me to draw this boundary, but I don’t know why—can you find me an intension that matches this extension? Can you give me a simple description of this boundary?”
he is not talking about “our intuitions”, but a single list provided by a single person.
(It is also the case that I would rather talk about that than whatever useless thing I would instead be doing with my time.)
Eliezer’s point in that post was that there are more and less natural ways to “carve reality at the joints”: however much we might say a definition is just a matter of preference, there are useful definitions and less useful ones. The conceptual analysis lukeprog is talking about does call for the rationalist taboo, in my opinion, but simply arguing about which definition is more useful, as Eliezer does (if we limit conceptual analysis to that), does not.