As a person with a scientific background who has suddenly come into academic philosophy, I have been puzzled by some aspects of its methodology. I have been particularly bothered by the reluctance of some people to give precise definitions of the concepts they are discussing. But lately, as a result of several discussions with a certain member of the Faculty, I have come to understand why this occurs (if not in the whole of philosophy, at least in this particular trend in academic philosophy).
I have seen that philosophers (I am talking about several of them published in top-ranked, peer-reviewed journals, the kinds of articles I read, study and discuss) who discuss a concept that tries to capture “x” have, on one hand, an intuitive idea of this concept: imprecise, vague, partial and maybe even self-contradictory. On the other hand, they have several “approaches” to “x”, corresponding to several philosophical trends, each with a more precise characterisation of “x” in terms of other, clearer ideas, i.e. in terms of the composites “y1”, “y2”, “y3”, … The major issue at stake in the discussion seems to be whether “x” is really “y1” or “y2” or “y3” or something else (note that sometimes a “yi” is a reduction to other terms, and sometimes a “yi” is a more accurate characterisation that keeps the words used to speak of “x”; that does not matter here).
What is puzzling is this: how come all of them agree they are talking about “x” when, actually, each is proposing a different approach? Indeed, those who say that “x” is “y1” are actually saying that we should adopt “y1” in our thought, and by “x” they understand “y1”. Others understand “y2” by “x”. Why don’t they realise they are talking past each other, that each of them is proposing a different concept, and that the problem arises only because they all want to call their concept “x”? Why don’t they use sub-indices for “x”, thereby keeping the word they so desperately want without conflating its possible meanings?
The answer I have come up with is this: they all believe that there is a unique, best sense to which they refer when they speak about “x”, even if they don’t know which it is. They agree that they have an intuitive grasp of something, and that something is “x”, but they disagree about how best to refine it (“y1”? “y2”? “y3”?). I, instead, used to focus only on “y1”, “y2” and “y3” and assess them according to whether they are self-consistent or not, simple or not, useful or not, etc. “x” had no clear definition, it barely meant anything to me, and therefore I decided I should banish it from my thought.
But I have come to the conclusion that it is useful to keep this loose idea of “x” in mind and to believe that there is something to the intuition, because only in contemplating this intuition do you seem to have access to knowledge that you have not been able to formalise; hence, the intuition is a source of new knowledge. Therefore, philosophers are quite right to keep vague, loose and perhaps self-contradictory concepts of “x”, because these are an important source from which they draw in order to create and refine the approaches “y1”, “y2” and “y3”, hoping that one of them might get “x” right. (At this point, one might claim that I am simply saying that it is useful to have the illusion that the concept of “x” really means something, even though it actually means nothing, simply because having the illusion is a source of inspiration. But doesn’t the very fact that it is a source of inspiration suggest that it is more than a simple illusion? There seems to be a sense in which a bad approach to “x” is still ABOUT “x”.)
I would be grateful for your thoughts on this.
P.S. A more daring hypothesis is that when philosophers get “x” right in some “y”, this approach “y” becomes a scientific paradigm. This also suggests that for those “x” where little progress has been made in millennia, the debate is not necessarily misguided; rather, the intuition is pointing towards something very, very complicated, and no one has yet been able to give a formal account of the things it refers to.
It might be useful to look at what happens in mathematics. What, for example, is a “number”? In antiquity, there were the whole numbers and fractions of everyday experience. You can count apples, and cut an apple in half. (BTW, I recently discovered that among the ancient Greeks, there was some dispute about whether 1 was a number. No, some said, 1 was the unit with which other things were measured. 2, 3, 4, and so on were numbers, but not 1.)
Then irrationals were discovered, and negative numbers, and the real line, and complex numbers, and quaternions, and octonions (also called Cayley numbers), and p-adic numbers, and perhaps there are even more things that mathematicians call numbers. And the ways that “numbers” behave have been generalised in other directions to define such things as fields, vector spaces, rings, and many more, the elements of which are generally not called numbers. But unlike philosophers, mathematicians do not dispute which of these is the “right” concept of “number”. All of the concepts have their uses, and many of them are called “numbers”, but “number” has never been given a formal definition, and does not need one.
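To make one such precisification concrete, here is a minimal sketch (my own illustration, not anything from the comment above) of the classic construction of the complex numbers as ordered pairs of reals, in the spirit of Hamilton. The “mysterious” new number i becomes an ordinary consequence of a definition:

```python
# A toy construction of the complex numbers as pairs of reals.
from dataclasses import dataclass

@dataclass(frozen=True)
class Complex:
    re: float
    im: float

    def __add__(self, other: "Complex") -> "Complex":
        return Complex(self.re + other.re, self.im + other.im)

    def __mul__(self, other: "Complex") -> "Complex":
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

i = Complex(0.0, 1.0)
print(i * i)  # Complex(re=-1.0, im=0.0): i squared is -1 by construction
```

Nothing in the construction settles whether these pairs “really are” numbers; they just behave usefully, which is all mathematics asks of them.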
For another example, consider “integration”. The idea of dividing an arbitrary shape into pieces of known area and summing their areas goes back at least to Archimedes’ “method of exhaustion”. When real numbers and functions became better understood it was formalised as Riemann integration. That was later generalised to Lebesgue integration, and then to Haar measure. Stochastic processes brought in Itô integration and several other forms.
Again, no-one, as far as I know, has ever been troubled by the question “but what is integration, really?” There is a general, intuitive idea of “measuring the size of things”, which has been given various precise formulations in various contexts. In some of those contexts it may make sense to speak of the “right” concept of integration, when there is one that subsumes all of the others and appears to be the most general possible (e.g. Lebesgue integration on Euclidean spaces), but in other contexts there may be multiple incomparable concepts, each with its own uses (e.g. Itô and Stratonovich integration for stochastic processes).
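To sketch the contrast in symbols (my own gloss, not part of the comment above): for a nonnegative function f, the Riemann and Lebesgue integrals formalise the same intuitive “area under the curve” but slice it along different axes,

\[ \int_a^b f(x)\,dx \;\approx\; \sum_i f(x_i^{*})\,(x_{i+1} - x_i) \qquad \text{(Riemann: partition the domain)} \]
\[ \int f\,d\mu \;=\; \int_0^{\infty} \mu\{x : f(x) > t\}\,dt \qquad \text{(Lebesgue: partition the range, via the “layer cake” identity)} \]

The two agree whenever the Riemann integral exists, but the second is defined for far wilder functions; which formalisation deserves the name is judged by the theorems each supports.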
But in philosophy, there are no theorems by which to judge the usefulness of a precisely defined concept.
I think this is a very good contrast, indeed. I agree with your view of the matter, and I think I will use “number” as a particular example next time I recount the thoughts which brought me to write the post. Thank you.
Scott Aaronson has formulated it in a similar way (quoted from here):
whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.
Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.
…A good replacement question Q′ should satisfy two properties: (a) Q′ should capture some aspect of the original question Q — so that an answer to Q′ would be hard to ignore in any subsequent discussion of Q, [and] (b) Q′ should be precise enough that one can see what it would mean to make progress on Q′: what experiments one would need to do, what theorems one would need to prove, etc.
Thank you for the reference. I am not sure whether Aaronson and I would agree. After all, depending on the situation, a philosopher of the kind I am talking about could claim that whatever progress has been made by answering the question Q′ also allows us to know the answer to the question Q (maybe because they are really the same question), or at least to get closer to it, instead of simply saying that Q does not have an answer.
I think Protagoras’ example of the question of whether whales are fish would make a good illustration of the former case.
It is almost completely uncontroversial that meaning is not determined by the conscious intentions of individual speakers (the “Humpty Dumpty” theory is false). More sophisticated theories of meaning note that people want their words to mean the same as what other people mean by them (as otherwise they are useless for communication). So, bare minimum, knowing what a word means requires looking at a community of language users, not just one speaker. But there are more complications; people want to use their words to mean the same as what experts intend more than they want to use their words to mean the same as what the ignorant intend. Partly that may be just to make coordination easier, but probably an even bigger motive is that people want their words to pick out useful and important categories, and of course experts are more likely to have latched on to those. A relatively uncontroversial extension of this is that meaning needn’t precisely match the intentions of any current language speaker or group of language speakers; if the intentions of speakers would point to one category, but there’s a very similar, mostly overlapping, but much more useful and important category, the correct account of the meaning is probably that it refers to the more useful and important category, even if none of the speakers know enough to pick out that category. That’s why words for “fish” in languages whose origins predate any detailed biological knowledge of whales nonetheless probably shouldn’t be thought to have ever included whales in their reference.
So, people can use words without anybody knowing exactly what they mean. And figuring out what they mean can be a useful exercise, as it requires learning more about what you’re dealing with; it isn’t just a matter of making an arbitrary decision. All that being said, I admit to having some skepticism about some of the words my fellow philosophers use; I suspect in a number of cases there are no ideal, unambiguous meanings to be uncovered (indeed, there are probably cases where they don’t mean anything at all, as the Logical Positivists sometimes argued).
You seem to think that words can be defined, and that if you look at a sentence and know the grammatical rules and the definitions of its words, you can work out what the sentence means. That belief is wrong. If reasoning worked that way, we would have smart AI by now. Meaning depends on context.
I like the concept of phenomenological primitives. Getting people to integrate a new phenomenological primitive into their thinking is really hard. I have even read someone argue that it’s impossible in physics education to teach new primitives.
Teaching physics students that a metal ball thrown at the floor bounces back because of springiness, contracting when it hits the floor and then expanding again, is hard. It takes a while before students stop reasoning that the floor somehow pushes the ball back and start reasoning that the steel ball contracts.
In biology there is the concept of a pseudogene. It’s basically a stretch of DNA that looks like a gene coding for a protein but that is not expressed as a protein.
At first glance that seems like a fine definition; on closer investigation, different biologists differ about what “looking like a gene” means. Different bioinformaticians each write their own algorithms to detect genes, and there are cases where algorithm A says that a sequence D is a pseudogene while algorithm B says that D isn’t.
Of course, changing the training data on which the algorithms run also changes the classification. A really thorough definition of a particular concept of a pseudogene would probably have to mention all the training data and the specific machine learning algorithm used.
There are various complicated arguments for preferring one algorithm over another on the grounds that the resulting classification is better. You can say it’s okay that the algorithm misses some strings that are genes because they don’t look the way genes are supposed to look, or you can insist that your algorithm detect every gene that exists. Depending on the choice, the number of pseudogenes changes.
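As a toy illustration of how the classification can flip between algorithms (a deliberately simplified sketch; the features and thresholds are invented and nothing like a real gene-finder):

```python
# Two toy "pseudogene detectors" that disagree about the same sequence.

def looks_like_gene_a(seq: str) -> bool:
    # Detector A: a start codon followed by an in-frame stop codon.
    if not seq.startswith("ATG"):
        return False
    return any(seq[i:i + 3] in ("TAA", "TAG", "TGA")
               for i in range(3, len(seq) - 2, 3))

def looks_like_gene_b(seq: str) -> bool:
    # Detector B: the same, plus a minimum length, on the theory that
    # very short open reading frames are noise rather than (pseudo)genes.
    return looks_like_gene_a(seq) and len(seq) >= 300

def is_pseudogene(seq: str, looks_like_gene, is_expressed: bool) -> bool:
    # "Looks like a gene but is not expressed as a protein."
    return looks_like_gene(seq) and not is_expressed

d = "ATG" + "GCT" * 20 + "TAA"  # a short, unexpressed reading frame (66 bases)
print(is_pseudogene(d, looks_like_gene_a, is_expressed=False))  # True
print(is_pseudogene(d, looks_like_gene_b, is_expressed=False))  # False
```

The sequence D is a pseudogene_A but not a pseudogene_B, which is exactly the situation described above.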
You could speak of pseudogene_A and pseudogene_B, but in many cases you don’t need to think about those details and can abstract them away. It’s okay if a few people figure out a decent definition of “pseudogene” that behaves the way it’s supposed to, and then others can use that notion.
In philosophy, the literature and the way the literature handles various concepts could be thought of as training data for the general human mental classification algorithm. A full definition of a concept would have to settle what the concept does in various edge cases.
On LW we have our jargon problem. We can use an existing word for a concept, or we can invent a new term for what we are talking about. We have to decide whether the existing term is good enough for our purposes or whether we mean something slightly different that warrants a new term.
That’s not always an easy decision.
To repeat a cliche: “There are only two hard things in Computer Science: cache invalidation and naming things”
Naming is also hard outside of computer science.
I recently asked a question that I think is similar to what you’re discussing. To recap, my question was about the philosophical debate over what “knowledge” really means. I asked why anyone cares—why not just define Knowledge Type A, Knowledge Type B, etc., and be done with it? If you were to taboo the word “knowledge”, would there be anything left to discuss?
Am I correct that that’s basically what you’re referring to? Do you have any thoughts specifically regarding my question?
Maybe those people are bad at “tabooing” their topics. Which may either mean the topic is very difficult to “taboo”, or that they simply do not think this way and instead e.g. try to associate the topic with applause lights. In other words, either the “philosophical” topics are those where tabooing is difficult, or the “philosophers” are people who are bad at tabooing.
Since there are many different philosophers trying many different things, I would guess that it really is difficult to taboo those topics. (Which does not exclude the possibility that most philosophers are actually bad at tabooing; I just think it is unlikely that all of them are.)
On the other hand, maybe the philosophers who taboo the topic properly are simply ignored by the others. The problem is never solved because even when someone solves it, others do not accept the solution.
Also, even proper tabooing does not answer the question immediately. Even if you taboo “knowledge” properly, the explanation may require some knowledge of informatics or neuroscience which may not be available yet.
And if they do, it stops being called “philosophy”. This happened most notably to natural philosophy.
His problem is that he isn’t clear about what “knowledge” means in academic philosophy, and he tabooed the word in his post. There’s obviously something left to discuss.
Yes, that is an example of what I am referring to. Sadly, I’m afraid I can’t give you any thoughts beyond what I have said for the general case, since I know little epistemology.
I think the approach you describe is valid but dangerous.
It’s valid because occasionally (and maybe even frequently) you want to think about something that you cannot properly express in words and so cannot define precisely and unambiguously. Some people (e.g. Heidegger) basically create a new language to deal with that problem, but more often you try to define that je ne sais quoi through, to use a geometric analogy, multiple projections. Imagine that you want to think about a 6-dimensional manifold. Human minds, alas, are not well suited to thinking in six dimensions, so you need to construct some projections of that manifold into a 3-dimensional space which humans can deal with. You, of course, can construct many different projections and you will feel that some of them are more useful for capturing the character of that 6-dimensional thing, and some not so much. But other people may and probably will disagree about which projections are useful and which are not.
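A minimal sketch of the projection analogy (my own illustration; the setup is invented): sample a shape in six dimensions and look at it through two different 3-dimensional “windows”. Each view is faithful as far as it goes, yet they suggest rather different objects:

```python
# Project a 6-dimensional point cloud into two different 3-D subspaces.
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 6))
points /= np.linalg.norm(points, axis=1, keepdims=True)  # unit sphere in R^6
points[:, 0] *= 5.0  # stretch one axis so the views genuinely differ

view_a = points[:, :3]  # keep coordinates 0-2: looks elongated
view_b = points[:, 3:]  # keep coordinates 3-5: looks round

print("extent of view A along its axes:", np.ptp(view_a, axis=0).round(2))
print("extent of view B along its axes:", np.ptp(view_b, axis=0).round(2))
```

Each projection is “correct”, and arguing over which one best captures the manifold is close to the disputes over the various “yi” above.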
It’s also dangerous for obvious reasons, starting with the well-known tale of the blind men and the elephant...
There are lots of examples where this struggle with definitions has been fruitful. In the early 20th century, at the boundary between philosophy and mathematics, there were debates about the meanings of “proof” and “computation”. It is true that the successful resolution of these debates has largely turned the subject from philosophy into math, although that has little to do with the organization of academic departments.