You are surely right that there is no point in arguing over definitions in at least one sense—especially the definition of “definition”. Your reply is reasonable, and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. Hopefully the following will illuminate rather than obfuscate.
Suppose:
we have two people, Albert and Barry;
we have one thing, a car, X, of determinate interior volume;
we have one sentence, S: “X is a subcompact”.
Albert affirms S; Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ (a car is a subcompact just in case 2,407 L < car volume < 2,803 L), but they disagree as to the volume of X. This is clearly a factual disagreement.
Scenario (2): Albert and Barry agree on the volume of X but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and it isn’t anything people should engage in for long, I agree.
Scenario (3): Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as a subcompact; i.e., X isn’t really a subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed the number of car models against volume, using various volume increments, you would find that cars really do fall into natural, if vague, groups, and that the natural cutoff for subcompacts differs from the received one. And this might really matter—a parking-challenged jurisdiction might offer a fee discount to subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
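To make Barry’s position in scenario (3) concrete, here is a minimal sketch in Python. Only the 2,407–2,803 L cutoffs come from the scenarios above; the sample volumes and the largest-gap heuristic are my own hypothetical illustration, not real automotive data or a serious clustering method.

```python
def is_subcompact(volume_l: float, lo: float = 2407, hi: float = 2803) -> bool:
    """Classify a car by interior volume under a given definition's cutoffs."""
    return lo < volume_l < hi

# Hypothetical interior volumes (litres) for nine models; they happen to
# fall into two natural groups with a large gap between them.
volumes = sorted([2350, 2390, 2410, 2440, 2460, 2790, 2820, 2860, 2900])

# Barry's move: take the midpoint of the largest gap between adjacent
# volumes as the "natural" upper cutoff, instead of the received 2803 L.
gap, natural_hi = max(
    (b - a, (a + b) / 2) for a, b in zip(volumes, volumes[1:])
)

X = 2780  # the disputed car's interior volume, in litres
print(is_subcompact(X))                 # True  -- received definition (Albert)
print(is_subcompact(X, hi=natural_hi))  # False -- natural cutoff ~2625 L (Barry)
```

The point is only that “find the natural groups, then draw the line” is a coherent procedure, and one that can disagree with the received definition about particular cases.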
Argument in scenarios 1 and 2 is futile: there is an acknowledged objective answer and a way to get it; the way to resolve the matter is to measure or to look it up. Arguments as in scenario 3, though, can be useful, especially with concepts less arbitrary than in the example. The goal in such cases is to clarify, to rationalize, concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, ‘knowledge’, ‘desires’, etc.) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is no received, widely accepted analysis or strict definition, just as, in meaning something by a word, we don’t typically have a strict definition in mind. On the contrary, intuitions about what falls under the concept are typically shared by almost everyone; one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which, often, everyone agrees refute the analysis.
I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn’t want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).
You may think it’s obvious, but I don’t see that you’ve shown that any of these three examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples, even far-fetched ones, helps illuminate the concept—helps us think about what a desire (and hence, in part, a rational agent) is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiably believe true propositions and yet, we intuitively agree, do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can cognitive science, e.g., make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned (I could get into that another time). But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
> Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other’s due to the specifics of unconscious associative learning and attribution substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we’ll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must, in an important sense, mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language, whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit one informed by empirical data. The point of the bashing is that it really does result in progress—we really do come to a better understanding of things.