You are surely right that there is no point in arguing over definitions in at least one sense—esp the definition of “definition”. Your reply is reasonable and I continue to think that the hallmark of rationality is susceptibility to persuasion, but I am not won over yet. I hope the following engages constructively with your comments.
Suppose
we have two people, Albert and Barry
we have one thing, a car, X, of determinate interior volume
we have one sentence, S: “X is a subcompact”.
Albert affirms S, Barry denies S.
Scenario (1): Albert and Barry agree on the standard definition of ‘subcompact’ - a car is a subcompact just in case 2 407 L < car volume < 2 803 L, but they disagree as to the volume of X. Clearly a factual disagreement.
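Scenario (1) can be made concrete with a tiny sketch: once the definition is agreed, the only open question is empirical. The volume bounds below come from the definition just quoted; the measured volume of X is invented for illustration.

```python
# Scenario (1) in code: the definition is agreed, so the dispute reduces
# to a measurement. Bounds are from the quoted definition; the measured
# volume of X is a hypothetical value.

SUBCOMPACT_MIN_L = 2407   # agreed lower bound (exclusive), litres
SUBCOMPACT_MAX_L = 2803   # agreed upper bound (exclusive), litres

def is_subcompact(volume_l: float) -> bool:
    """Apply the agreed definition of 'subcompact'."""
    return SUBCOMPACT_MIN_L < volume_l < SUBCOMPACT_MAX_L

measured_volume = 2650    # hypothetical measurement of X, in litres
print(is_subcompact(measured_volume))   # True: Albert was right about S
```

Once the measurement is in hand, there is nothing left to argue about, which is the point of scenario (1).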
Scenario (2): Albert and Barry agree on the volume of X, but disagree on the standard definition of ‘subcompact’ (a visit to Wikipedia would resolve the matter). This is a disagreement about standard definitions, and isn’t anything people should engage in for long, I agree.
Scenario (3): Albert and Barry agree as to the volume of X and the standard definition, but Barry thinks the standard definition is misguided, and that if it were corrected, X wouldn’t be classified as subcompact - i.e., X isn’t really subcompact, notwithstanding the received definition. This doesn’t have to be a silly position. It might be that if you graphed numbers of models of car against volume, using various different volume increments, you would find cars really do fall into natural -if vague- groups, and that the natural cutoff for subcompacts is different than the received definition. And this might really matter—a parking-challenged jurisdiction might offer a fee discount for subcompact owners. I would call this a disagreement about the concept of ‘subcompact car’. I understand you want to call this a disagreement about definitions, albeit of a different kind than in scenario (2).
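Barry’s graphing idea can be sketched in a few lines: bin the volumes at various increments and look for empty stretches between occupied bins. All the data below is invented purely for illustration; the point is only that a “natural cutoff” could be found empirically rather than stipulated.

```python
from collections import Counter

# Hypothetical interior volumes (litres) for some car models -- invented
# purely to illustrate Barry's idea that models might cluster naturally.
volumes = [2150, 2180, 2210, 2240, 2260,   # a small-car cluster
           2550, 2580, 2600, 2630, 2660,   # a mid-size cluster
           3050, 3100, 3150]               # larger cars

def natural_gaps(vols, bin_width):
    """Bin the volumes, then report the empty stretches between occupied
    bins -- candidate 'natural cutoffs' between groups of models."""
    counts = Counter((v // bin_width) * bin_width for v in vols)
    occupied = sorted(counts)
    return [(occupied[i] + bin_width, occupied[i + 1])
            for i in range(len(occupied) - 1)
            if occupied[i + 1] - occupied[i] > bin_width]

# Try various volume increments, as the scenario suggests.
for width in (50, 100, 200):
    print(width, natural_gaps(volumes, width))
```

If the gaps that emerge sit somewhere other than the received 2407 L boundary, Barry has an empirical case that the definition misdraws the line.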
Argument in scenarios 1 and 2 is futile—there is an acknowledged objective answer, and a way to get it—the way to resolve the matter is to measure or to look it up. Arguments as in scenario 3, though, can be useful -especially with less arbitrary concepts than in the example. The goal in such cases is to clarify -to rationalize- concepts. Even if you don’t arrive at an uncontroversial end point, you often learn a lot about the concepts (‘good’, ‘knowledge’, ‘desires’, etc.) in the process. Your example of the re-definition of ‘planet’ fits this model, I think.
This said, none of these scenarios represents a typical disagreement over a conceptual analysis. In such a debate, there typically is not a received, widely accepted analysis or strict definition, just as in meaning something by a word, we don’t typically have in mind some strict definition. On the contrary, typically, intuitions about what falls under the concept are agreed by almost everyone, one person sticks his neck out with proposed necessary and sufficient conditions meant to capture all and only the agreed instances, and then challengers work to contrive examples which often everyone agrees refute the analysis. This is how I see it, anyway. I’d be interested to know if this seems wrong.
I opened with a debate that everybody knew was silly, and tried to show that it was analogous to popular forms of conceptual analysis. I didn’t want to start with a popular example of conceptual analysis because philosophy-familiar people will have been trained not to find those examples silly. I gave at least three examples of actual philosophical analysis in my post (Schroeder on desire, Gettier on knowledge, Jackson on morality).
You may think it’s obvious, but I don’t see that you’ve shown any of these three examples is silly. I don’t see that Schroeder’s project is silly (I haven’t read Schroeder, admittedly). Insofar as rational agents are typically modelled merely in terms of their beliefs and desires, what desires are is important to our understanding of ourselves as rational. Testing a proposed analysis by seeking to contrive counter-examples -even far-fetched- helps illuminate the concept—helps us think about what a desire -and hence in part a rational agent- is.
As for Gettier, his paper, as I know you are aware, listed counter-examples to the analysis of knowledge as justified true belief. He contrived a series of cases in which people justifiedly believe true propositions, and yet -we intuitively agree- do not know them. The key point is that effectively everyone shares the intuition—that’s why the paper was so successful, and this is often how these debates go. Part of what’s interesting is precisely that although people do share quite subtle intuitions, the task of making them explicit—conceptual analysis—is elusive.
I objected to your example because I didn’t see how anyone could have an intuition based on what you said, whereas clear intuitions are key to such arguments. Now, it would definitely be a bad plan to take on the burden of defending all philosophical arguments—not all published arguments are top-drawer stuff (but can Cog Sci, e.g., make this boast?). One target of much abuse is John Searle’s Chinese Room argument. His argument is multiply flawed, as far as I’m concerned -could get into that another time. But I still think it’s interesting, for what it reveals about differences in intuitions. There are quite different reactions from smart people.
Words mean slightly different things in different cultures, subcultures, and small communities. We develop different intuitions about their meaning based on divergent life experiences. Our intuitions differ from each other’s due to the specifics of unconscious associative learning and attribute substitution heuristics. What is the point of bashing our intuitions about the meaning of terms against each other for thousands of pages, in the hopes that we’ll converge on a precise set of necessary and sufficient conditions? Even if we can get Albert and Barry to agree, what happens when Susan wants to use the same term, but has slightly differing intuitions about its meaning?
This gets to the crux. We make different judgements, true, but in virtue of speaking the same language we must in an important sense mean the same thing by our words. The logic of communication requires that we take ourselves to be talking about the same thing in using language -whether that thing be goodness, knowledge, planethood or hockey pucks. Your point about the IAU and the definition of ‘planet’ demonstrates the same kind of process of clarification of a concept, albeit informed by empirical data. The point of the bashing is that it really does result in progress—we really do come to a better understanding of things.
As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU’s definition for ‘planet’, I fail to see why clarifying our intuitive concepts is a good use of all that brain power. Such work might theoretically have some value for the psychology of concepts and for linguistics, and yet I suspect neither science would miss philosophy if philosophy went away. Indeed, scientific psychology is often said to have ‘debunked’ conceptual analysis because concepts are not processed in our brains in terms of necessary and sufficient conditions.
But I’m not sure I’m reading you correctly. Why do you think it’s useful to devote all that brainpower to clarifying our intuitive concepts of things?
As I see it, your central point is that conceptual analysis is useful because it results in a particular kind of process: the clarification of our intuitive concepts. Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU’s definition for ‘planet’, I fail to see why clarifying our intuitive concepts is a good use of all that brain power.
I think that where we differ is on ‘intuitive concepts’ -what I would want to call just ‘concepts’. I don’t see that stipulative definitions replace them. Scenario (3), and even the IAU’s definition, illustrate this. It is coherent for an astronomer to argue that the IAU’s definition is mistaken. This implies that she has a more basic concept -which she would strive to make explicit in arguing her case- different than the IAU’s. For her to succeed in making her case -which is imaginable- people would have to agree with her, in which case we would at least partially have to share her concept. The IAU’s definition tries to make explicit our shared concept -and to some extent legislates, admittedly- but it is a different sort of animal than what we typically use in making judgements.
Philosophy doesn’t impact non-philosophical activities often, but when it does the impact is often quite big. Some examples: the influence of Mach on Einstein, of Rousseau and others on the French and American revolutions, Mill on the emancipation of women and freedom of speech, Adam Smith’s influence on economic thinking.
I consider though that the clarification is an end in itself. This site proves -what’s obvious anyway- that philosophical questions naturally have a grip on thinking people. People usually suppose the answer to any given philosophical question to be self-evident, but equally we typically disagree about what the obvious answer is. Philosophy is about elucidating those disagreements.
Keeping people busy with activities which don’t turn the planet into more non-biodegradable consumer durables is fine by me. More productivity would not necessarily be a good thing (...to end with a sweeping undefended assertion.).
Because our intuitive concepts are so muddled and not as clear-cut and useful as a stipulated definition such as the IAU’s definition for ‘planet’, I fail to see why clarifying our intuitive concepts is a good use of all that brain power.
OTOH, there is a class of fallacies (the No True Scotsman argument, tendentious redefinition, etc.), which are based on getting stipulative definitions wrong. Getting them right means formalisation of intuition or common usage, or something like that.