I think this radically misunderstands what thought experiments are for. As I see it, the job of philosophy is to clear up our own conceptual confusions; that’s not the sort of thing that ever could conflict with science!
(EDIT: I mean that it shouldn’t conflict with science; if you do your philosophy wrong then you might end up conflicting.)
Besides, Putnam’s thought experiment can be easily tweaked to get around that problem: suppose that on Twin Earth cats are in fact very sophisticated cat-imitating robots. Then a similar conclusion follows about the meaning of “cat”. The point is that if X had in fact been Y, where Y is the same as X in all the respects which we use to pick out X, then words which currently refer to X would refer to Y in that situation. I think Putnam even specifies that we are to imagine that XYZ behaves chemically the same as H2O. Sure, that couldn’t happen in our world; but the laws of physics might have turned out differently, and we ought to be able to conceptually deal with possibilities like this.
Right. It’s very useful to clear up conceptual confusions. That’s much of what The Sequences can teach people. What’s wrong is the claim that attempts to clear up conceptual confusions couldn’t conflict with science.
Hm. Perhaps you’re right. Maybe I should have said that it shouldn’t ever conflict with science. But I think that’s because if you’re coming into conflict with science you’re doing your philosophy wrong, more than anything else.
Hmm. I guess I agree with that. That is, dominant scientific theories can be conceptually confused and need correction.
But would 20th century analytic philosophy have denied that? The opposite seems to me to be true. Analytic philosophers would justify their intrusions into the sciences by arguing that they were applying their philosophical acumen to identify conceptual confusions that the scientists hadn’t noticed. (I’m thinking of Jerry Fodor’s recent critique of the explanatory power of Darwinian natural selection, for example—though that’s from our own century.)
Just to be clear, I think that analytic philosophers often should have been more humble when they barged in and started telling scientists how confused they were. Fodor’s critique of NS would again be my go-to example of that.
Dennett states this point in typically strong terms in his review of Fodor’s argument:
I cannot forbear noting, on a rather more serious note, that such ostentatiously unresearched ridicule as Fodor heaps on Darwinians here is both very rude and very risky to one’s reputation. (Remember Mary Midgley’s notoriously ignorant and arrogant review of The Selfish Gene? Fodor is vying to supplant her as World Champion in the Philosophers’ Self-inflicted Wound Competition.) Before other philosophers countenance it they might want to bear in mind that the reaction of most biologists to this sort of performance is apt to be, at best: “Well, we needn’t bother paying any attention to him. He’s just one of those philosophers playing games with words.” It may be fun, but it contributes to the disrespect that many non-philosophers have for our so-called discipline.
I don’t think I’m committed to the view of concepts that you’re attacking. Concepts don’t have to be some kind of neat things you can specify with necessary and sufficient conditions or anything. And TBH, I prefer to talk about languages. I don’t think philosophy can get us out of any holes we didn’t get ourselves into!
(FWIW, I do also reject the Quinean thesis that everything is continuous with science, which might be another part of your objection)
As I see it, the job of philosophy is to clear up our own conceptual confusions; that’s not the sort of thing that ever could conflict with science!
It certainly can, if the job is done badly.
Agreed that Grisdale’s argument isn’t very good; still, I have a hard time taking Putnam’s argument seriously, or even the whole context in which he presented his thought experiment. Like a lot of philosophy, it reminds me of a bunch of maths noobs arguing long and futilely in a not-even-wrong manner over whether 0.999...=1.
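(For what it’s worth, that particular dispute does have a short, definitive resolution once the notation is pinned down: by the definition of the infinite decimal as a geometric series,

$$0.999\ldots \;=\; \sum_{n=1}^{\infty} \frac{9}{10^n} \;=\; \frac{9/10}{1 - 1/10} \;=\; 1.$$

The arguing is futile precisely because, once “0.999...” is defined as a limit, there is nothing left to dispute.)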
We on Earth use “water” to refer to a certain substance; those on Twin Earth use “water” to refer to a different substance with many of the same properties; our scientists and theirs meet with samples of the respective substances, discover their constitutions are actually different, and henceforth change their terminology to make it clear, when it needs to be, which of the two substances is being referred to in any particular case.
It sounds to me that you’re expecting something from Putnam’s argument that he isn’t trying to give you. He’s trying to clarify what’s going on when we talk about words having “meaning”. His conclusion is that the “meaning”, insofar as it involves “referring” to something, depends on stuff outside the mind of the speaker. That may seem obvious in retrospect, but it’s pretty tempting to think otherwise: as competent users of a language, we tend to feel like we know all there is to know about the meanings of our own words! That’s the sort of position that Putnam is attacking: a position about that mysterious word “meaning”.
EDIT: to clarify, I’m not necessarily in total agreement with Putnam, I just don’t think that this is the way to refute him!
It still looks to me like arguing about a wrong question. We use words to communicate with each other, which requires that by and large we learn to use the same words in similar ways. There are interesting questions to ask about how we do this, but questions of a sort that require doing real work to discover answers. To philosophically ask, “Ah, but what sort of thing is a meaning? What are meanings? What is the process of referring?” is nugatory.
It is as if one were to look at the shapes that crystals grow into and ask not, “What mechanisms produce these shapes?” (a question answered in the laboratory, not the armchair, by discovering that atoms bind to each other in ways that form orderly lattices), but “What is a shape?”
It is as if one were to look at the shapes that crystals grow into and ask not, “What mechanisms produce these shapes?” (a question answered in the laboratory, not the armchair, by discovering that atoms bind to each other in ways that form orderly lattices), but “What is a shape?”
Why aren’t both questions valuable to ask? The latter one must have contributed to the eventual formation of the mathematical field of geometry.
I find it difficult to see any trace of the idea in Euclid. Circles and straight lines, yes, but any abstract idea of shape in general, if it can be read into geometry at all, would only be in the modern axiomatisation. And done by mathematicians finding actual theorems, not by philosophers assuming there is an actual thing behind our use of the word, that it is their task to discover.
And done by mathematicians finding actual theorems, not by philosophers assuming there is an actual thing behind our use of the word, that it is their task to discover.
I don’t mean to pick just on you, but I think philosophy is often unfairly criticized for being less productive than other fields, when the problem is just that philosophy is damned hard, and whenever we do discover, via philosophy, some good method for solving a particular class of problems, then people no longer consider that class of problems to belong to the realm of philosophy, and forget that philosophy is what allowed us to get started in the first place. For example, without philosophy, how would one have known that proving theorems using logic might be a good way to understand things like circles, lines, and shapes (or even came up with the idea of “logic”)?
(Which isn’t to say that there might not be wrong ways to do philosophy. I just think we should cut philosophers some slack for doing things that turn out to be unproductive in retrospect, and appreciate more the genuine progress they have made.)
For example, without philosophy, how would one have known that proving theorems using logic might be a good way to understand things like circles, lines, and shapes (or even came up with the idea of “logic”)?
How people like Euclid came up with the methods they did is, I suppose, lost in the mists of history. Were Euclid and his predecessors doing “philosophy”? That’s just a definitional question.
The problem is that there is no such thing as philosophy. You cannot go and “do philosophy”, in the way that you can “do mathematics” or “do skiing”. There are only people thinking, some well and some badly. The less they get out of their armchairs, the more their activity is likely to be called philosophy, and in general, the less useful their activity is likely to be. Mathematics is the only exception, and only superficially, because mathematical objects are clearly outside your head, just as much as physical ones are. You bang up against them, in a way that never happens in philosophy.
When philosophy works, it isn’t philosophy any more, so the study of philosophy is the study of what didn’t work. It’s a subject defined by negation, like the biology of non-elephants. It’s like a small town in which you cannot achieve anything of substance except by leaving it. Philosophers are the ones who stay there all their lives.
I realise that I’m doing whatever the opposite is of cutting them some slack. Maybe trussing them up and dumping them in the trash.
I just think we should cut philosophers some slack for doing things that turn out to be unproductive in retrospect, and appreciate more the genuine progress they have made.
What has philosophy ever done for us? :-) I just googled that exact phrase, and the same without the “ever”, but none of the hits gave a satisfactory defence. In fact, I turned up this quote from the philosopher Austin, characterising philosophy much as I did above:
“It’s the dumping ground for all the leftovers from other sciences, where everything turns up which we don’t know quite how to take. As soon as someone discovers a reputable and reliable method of handling some portion of these residual problems, a new science is set up, which tends to break away from philosophy.”
Responding to the sibling comment here as it’s one train of thought:
How might one know, a priori, that “What is a circle?” is a valid question to ask, but not “What is a shape?”
By knowing this without knowing why. That’s all that a priori knowledge is: stuff you know without knowing why. Or to make the levels of abstraction more explicit, a priori beliefs are beliefs you have without knowing why. Once you start thinking about them, asking why you believe something and finding reasons to accept or reject it, it’s no longer a priori. The way to discover whether either of those questions is sensible is to try answering them and see where that leads you.
That activity is called “philosophy”, but only until the process gets traction and goes somewhere. Then it’s something else.
That’s all that a priori knowledge is: stuff you know without knowing why. Or to make the levels of abstraction more explicit, a priori beliefs are beliefs you have without knowing why. Once you start thinking about them, asking why you believe something and finding reasons to accept or reject it, it’s no longer a priori.
The problem is that there is no such thing as philosophy. You cannot go and “do philosophy”, in the way that you can “do mathematics” or “do skiing”. There are only people thinking, some well and some badly. The less they get out of their armchairs, the more their activity is likely to be called philosophy, and in general, the less useful their activity is likely to be. [...]
When philosophy works, it isn’t philosophy any more, so the study of philosophy is the study of what didn’t work. It’s a subject defined by negation, like the biology of non-elephants.
I think there are useful kinds of thought that are best categorized as “philosophy” (even if it’s just “philosophy of the gaps”, i.e. not clear enough to fall into an existing field); mostly around the area of how we should adapt our behavior or values in light of learning about game theory, evolutionary biology, neuroscience etc. - for example, “We are the product of evolution, therefore it’s every man for himself” is the product of bad philosophy, and should be fixed with better philosophy rather than with arguments from evolutionary biology or sociology.
A lot of what we discuss here on LessWrong falls more easily under the heading of “philosophy” than that of any other specific field.
(Note that whether most academic philosophers are producing any valuable intellectual contributions is a different question, I’m only arguing “some valuable contributions are philosophy”)
Well, we seem to have this word, “meaning”, that pops up a lot and that lots of people seem to think is pretty interesting, and questions of whether people “mean” the same thing as other people do turn up quite often. That said, it’s often a pretty confusing topic. So it seems worthwhile to try and think about what’s going on with the word “meaning” when people use it, and if possible, clarify it. If you’re just totally uninterested in that, fine. Or you can just ditch the concept of “meaning” altogether, but good luck talking to anyone else about interesting stuff in that case!
Well, I did just post my thinking about that, and I feel like I’m the only person pointing out that Putnam and the rest are arguing the acoustics of unheard falling trees. To me, the issue is dissolved so thoroughly that there isn’t a question left, other than the real questions of what’s going on in our brains when we talk.
Okay, I was kind of interpreting you as just not being interested in these kinds of question. I agree that some questions about “meaning” don’t go anywhere and need to be dissolved, but I don’t think that all such questions can be dissolved. If you don’t think that any such questions are legitimate, then obviously this will look like a total waste of time to you.
the “meaning”, insofar as it involves “referring” to something, depends on stuff outside the mind of the speaker. That may seem obvious in retrospect, but it’s pretty tempting to think otherwise
The idea produces non-obvious results if you apply it to, for example, mathematical concepts. They certainly refer to something, which is therefore outside the mind. Conclusion: Hylaean Theoric World.
Being convinced by Putnam on this front doesn’t mean that you have to think that everything refers! There are plenty of accounts of what’s going on with mathematics that don’t have mathematical terms referring to floaty mathematical entities. Besides, Putnam’s point isn’t that the referent of a term is going to be outside your head; that’s pretty uncontroversial, as long as you think we’re talking about something outside your head. What he argues is that this means that the meaning of a term depends on stuff outside your head, which is a bit different.
There are plenty of accounts of what’s going on with mathematics that don’t have mathematical terms referring to floaty mathematical entities
Could you list the one(s) that you find convincing? (even if this is somewhat off-topic in this thread...)
What he argues is that this means that the meaning of a term depends on stuff outside your head, which is a bit different
That is, IIUC, the “meaning” of a concept is not completely defined by its place within the mind’s conceptual structure. This seems correct, as the “meaning” is supposed to be about the correspondence between the map and the territory, and not about some topological property of the map.
Have a look here for a reasonable overview of philosophy of maths. Any kind of formalism or nominalism won’t have floaty mathematical entities—in the former case you’re talking about concrete symbols, and in the latter case about the physical world in some way (these are broad categories, so I’m being vague).
Personally, I think a kind of logical modal structuralism is on the right track. That would claim that when you make a mathematical statement, you’re really saying: “It is a necessary logical truth that any system which satisfied my axioms would also satisfy this conclusion.”
So if you say “2+2 = 4”, you’re actually saying that if there were a system that behaved like the natural numbers (which is logically possible, so long as the axioms are consistent), then in that system two plus two would equal four.
See Hellman’s “Mathematics Without Numbers” for the classic defense of this kind of position.
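The “necessary logical truth about any system satisfying the axioms” reading can be made quite concrete. As a sketch in Lean 4 (the `NatLike` structure here is my own illustrative stand-in for “a system that behaves like the natural numbers”, not anything from Putnam or Hellman):

```lean
-- "2 + 2 = 4", structuralist-style: a claim about *every* system
-- satisfying the axioms, not about any particular platonic objects.
-- `NatLike` packages a zero, a successor, and an addition obeying
-- the usual recursion equations.
structure NatLike (α : Type) where
  zero : α
  succ : α → α
  add  : α → α → α
  add_zero : ∀ a, add a zero = a
  add_succ : ∀ a b, add a (succ b) = succ (add a b)

-- In any such system, "two plus two" reduces to "four" by the
-- recursion equations alone.
theorem two_add_two {α : Type} (N : NatLike α) :
    N.add (N.succ (N.succ N.zero)) (N.succ (N.succ N.zero))
      = N.succ (N.succ (N.succ (N.succ N.zero))) := by
  rw [N.add_succ, N.add_succ, N.add_zero]
```

Any such system, whether its elements are numerals, pebbles, or voltage levels, is forced to validate the equation, which is all the structuralist claim amounts to.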
Thanks for the answer! But I am still confused regarding the ontological status of “2” under many of the philosophical positions. Or, better yet, the ontological status of the real numbers field R. Formalism and platonism are easy: under formalism, R is a symbol that has no referent. Under platonism, R exists in the HTW. If I understand your preferred position correctly, it says: “any system that satisfies axioms of R also satisfies the various theorems about it”. But, assuming the universe is finite or discrete, there is no physical system that satisfies axioms of R. Does it mean your position reduces to formalism then?
There’s no actual system that satisfies the axioms of the reals, but there (logically) could be. If you like, you could say that there is a “possible system” that satisfies those axioms (as long as they’re not contradictory!).
The real answer is that talk of numbers as entities can be thought of as syntactic sugar for saying that certain logical implications hold. It’s somewhat revisionary, in that that’s not what people think that they are doing, and people talked about numbers long before they knew of any axiomatizations for them, but if you think about it it’s pretty clear why those ways of talking would have worked, even if people hadn’t quite figured out the right way to think about it yet.
If you like, you can think of it as saying: “Numbers don’t exist as floaty entities, so strictly speaking normal number talk is all wrong. However, [facts about logical implications] are true, and there’s a pretty clear truth-preserving mapping between the two, so perhaps this is what people were trying to get at.”
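Roughly, and simplifying a lot, Hellman’s translation scheme sends each arithmetic sentence $S$ to a pair of modal claims:

$$S \;\rightsquigarrow\; \Box\,\forall X\,\bigl(\mathrm{PA}[X] \rightarrow S^{X}\bigr) \quad\text{together with}\quad \Diamond\,\exists X\;\mathrm{PA}[X],$$

where $\mathrm{PA}[X]$ says that the system $X$ satisfies the (second-order) Peano axioms and $S^{X}$ is $S$ with its quantifiers and operations relativized to $X$. The second claim is needed so that the first doesn’t come out vacuously true.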
Seems to me that you can dodge the Platonic implications (that Anathem was riffing on). You can talk about relations between objects, which depend on objects outside the mind of the speaker but have no independent physical existence in themselves; you need not only a shared referent but also some shared inference, but that’s still quite achievable without needing to invoke some Form of, say, mathematical associativeness.
“that’s not the sort of thing that ever could conflict with science!” Do you mean to include psychology in ‘science’? If so, why would you care about it then?
Psychology could (and often does!) show that the way we think about our own minds is just unhelpful in some way: actually, we work differently. I think the job of philosophy is to clarify what we’re actually doing when we talk about our minds, say, regardless of whether that turns out to be a sensible way to talk about them. Psychology might then counsel that we ditch that way of talking! Sometimes we might get to that conclusion from within philosophy; e.g. Parfit’s conclusion that our notion of personal identity is just pretty incoherent.
I meant to suggest that any philosophy which could never conflict with science is immediately suspicious unless you mean something relatively narrow by ‘science’ (for example, by excluding psychology). If you claim that something could never be disproven by science, that’s pretty close to saying ‘it won’t ever affect your decisions’, in which case, why care?
I think of philosophy as more like trying to fix the software that your brain runs on. Which includes, for example, how you categorize the outside world, and also your own model of yourself. That sounds like it ought to be the stamping ground of cognitive science, but we actually have a nice, high-level access to this kind of thing that doesn’t involve thinking about neurons at all: language. So we can work at that level, instead (or as well).
A lot of the stuff in the Sequences, for example, falls under this: it’s an investigation into what the hell is going on with our mindware, (mostly) done at the high level of language.
(Disclaimer: Philosophers differ a lot about what they think philsophy does/should do. Some of them definitely do think that it can tell you stuff about the world that science can’t, or that it can overrule it, or any number of crazy things!)
Putnam’s thought experiment can be easily tweaked to get around that problem: suppose that on Twin Earth cats are in fact very sophisticated cat-imitating robots.
That would be an even weirder version of Earth. Well, less weird because it wouldn’t be a barren, waterless hellscape, but easier for my mind to paint.
A universe where cats were replaced with cat-imitating robots would be amazing for humans. Instead of the Bronze Age, we would hunt cats for their strong skeletons to use as tools and weapons. Should the skeletons instead be made of some brittle epoxy, we would be able to study cat factories and bootstrap our mechanical knowledge. Should cats be self-replicating with nano-machines, we would employ them as guard animals for crops, bootstrapping agriculture; an artificial animal which cannot be eaten would have caused other animals to evolve not to mess with them. Should cats, somehow, manage to turn themselves edible after they die, we would still be able to look at their construction and know that they were not crafted by evolution; humanity would know that there was another race out there in the stars and that artificial life was possible. Twin-Eliezer could point to cats and say, “see, we can do this,” and all of humanity would be able to agree and put huge sums of money into AI research.
And if they are cat-robots who are indeed made of bone instead of metal, who reproduce just like cats do, who have exactly the same chemical composition as cats, and evolved here on earth in the exact same way cats do… then they’re just cats. The concept of identical-robot-cats is no different than the worthless concept of philosophical zombies. That’s the whole point of the quote.
Perhaps “cat” was a bad idea: we know too much about cats. Pick something where there are some properties that we don’t know about yet; then consider the situation where they are as they actually are, and where they’re different. The two would be indistinguishable to us, but that doesn’t mean that no experiment could ever tell them apart. See also asparisi’s comment.
I am most assuredly fighting the hypothetical (I’m familiar with and disagree with that link). As far as I can tell, that’s what Thagard is doing too.
I’m reminded of a rebuttal to that post, about how hypotheticals are used as a trap. Putnam intentionally chose to create a scientifically incoherent world. He could have chosen a jar of acid instead of an incoherent twin-earth, but he didn’t. He wanted the sort of confusion that could only come from an incoherent universe (luke links that in his quote).
I think that’s Thagard’s point. As he notes: these types of thought experiments are only expressions of our ignorance, and not deep insights about the mind.
What mileage do you think Putnam is getting here from creating this confusion? Do you think the point he’s trying to make hinges on the incoherence of the world he’s constructed?
I’m not quite sure why it matters that the world Putnam creates is “scientifically incoherent”—which I take to mean it conflicts with our current understanding of science?
As far as we know, the facts of science could have been different; hell, we could still be wrong about the ones we currently think we know. So our language ought to be able to cope with situations where the scientific facts are different than they actually are. It doesn’t matter that Putnam’s scenario can’t happen in this world: it could have happened, and thinking about what we would want to say in that situation can be illuminating. That’s all that’s being claimed here.
I wonder if the problem is referring to these kinds of things as “thought experiments”. They’re not really experiments. Imagine a non-native speaker asking you about the usage of a word, who concocts an unlikely (or maybe even impossible scenario) and then asks you whether the word would apply in that situation. That’s more like what’s going on, and it doesn’t bear a lot of resemblance to a scientific experiment!
Well you could go for something much more subtle, like using sugar of the opposite handedness on the other ‘Earth’. I don’t think it really changes the argument much whether the distinction is subtle or not.
I think this radically misunderstands what thought experiments are for. As I see it, the job of philosophy is to clear up our own conceptual confusions; that’s not the sort of thing that ever could conflict with science!
(EDIT: I mean that it shouldn’t conflict with science; if you do your philosophy wrong then you might end up conflicting.)
Besides, Putnam’s thought experiment can be easily tweaked to get around that problem: suppose that on Twin Earth cats are in fact very sophisticated cat-imitating robots. Then a similar conclusion follows about the meaning of “cat”. The point is that if X had in fact been Y, where Y is the same as X in all the respects which we use to pick out X, then words which currently refer to X would refer to Y in that situation. I think Putnam even specifies that we are to imagine that XYZ behaves chemically the same as H2O. Sure, that couldn’t happen in our world; but the laws of physics might have turned out differently, and we ought to be able to conceptually deal with possibilities like this.
I think this is wrong, and one of the major mistakes of 20th century analytic philosophy.
What is wrong, that the job of philosophy is to clear up conceptual confusions, or that philosophy could not conflict with science?
It is still worthwhile to clear up conceptual confusions, even if the specific approach known as “conceptual analysis” is usually a mistake.
Right. It’s very useful to clear up conceptual confusions. That’s much of what The Sequences can teach people. What’s wrong is the claim that attempts to clear up conceptual confusions couldn’t conflict with science.
Hm. Perhaps you’re right. Maybe I should have said that it shouldn’t ever conflict with science. But I think that’s because if you’re coming into conflict with science you’re doing your philosophy wrong, more than anything else.
Would you mind adding this clarification to your original comment above that was upvoted 22 times? :)
Sure; it is indeed ambiguous ;)
Hmm. I guess I agree with that. That is, dominant scientific theories can be conceptually confused and need correction.
But would 20th century analytic philosophy have denied that? The opposite seems to me to be true. Analytic philosophers would justify their intrusions into the sciences by arguing that they were applying their philosophical acumen to identify conceptual confusions that the scientists hadn’t noticed. (I’m thinking of Jerry Fodor’s recent critique of the explanatory power of Darwinian natural selection, for example—though that’s from our own century.)
No, I don’t think the better half of 20th century analytic philosophers would have denied that.
Just to be clear, I think that analytic philosophers often should have been more humble when they barged in and started telling scientist how confused they were. Fodor’s critique of NS would again be my go-to example of that.
Dennett states this point in typically strong terms in his review of Fodor’s argument:
I
I don’t think I’m committed to the view of concepts that you’re attacking. Concepts don’t have to be some kind of neat things you can specify with necessary and sufficient conditions or anything. And TBH, I prefer to talk about languages. I don’t think philosophy can get us out of any holes we didn’t get ourselves into!
(FWIW, I do also reject the Quinean thesis that everything is continuous with science, which might be another part of your objection)
It certainly can, if the job is done badly.
Agreed that Grisdale’s argument isn’t very good, I have a hard time taking Putnam’s argument seriously, or even the whole context in which he presented his thought experiment. Like a lot of philosophy, it reminds me of a bunch of maths noobs arguing long and futilely in a not-even-wrong manner over whether 0.999...=1.
We on Earth use “water” to refer to a certain substance; those on Twin Earth use “water” to refer to a different substance with many of the same properties; our scientists and theirs meet with samples of the respective substances, discover their constitutions are actually diffferent, and henceforth change their terminology to make it clear, when it needs to be, which of the two substances is being referred to in any particular case.
There is no problem here to solve.
Well, sure, you can do philosophy wrong!
It sounds to me that you’re expecting something from Putnam’s argument that he isn’t trying to give you. He’s trying to clarify what’s going on when we talk about words having “meaning”. His conclusion is that the “meaning”, insofar as it involves “referring” to something, depends on stuff outside the mind of the speaker. That may seem obvious in retrospect, but it’s pretty tempting to think otherwise: as competent users of a language, we tend to feel like we know all there is to know about the meanings of our own words! That’s the sort of position that Putnam is attacking: a position about that mysterious word “meaning”.
EDIT: to clarify, I’m not necessarily in total agreement with Putnam, I just don’t think that this is the way to refute him!
It still looks to me like arguing about a wrong question. We use words to communicate with each other, which requires that by and large we learn to use the same words in similar ways. There are interesting questions to ask about how we do this, but questions of a sort that require doing real work to discover answers. To philosophically ask, “Ah, but what what sort of thing is a meaning? What are meanings? What is the process of referring?” is nugatory.
It is as if one were to look at the shapes that crystals grow into and ask not, “What mechanisms produce these shapes?” (a question answered in the laboratory, not the armchair, by discovering that atoms bind to each other in ways that form orderly lattices), but “What is a shape?”
Why aren’t both questions valuable to ask? The latter one must have contributed to the eventual formation of the mathematical field of geometry.
I find it difficult to see any trace of the idea in Euclid. Circles and straight lines, yes, but any abstract idea of shape in general, if it can be read into geometry at all, would only be in the modern axiomatisation. And done by mathematicians finding actual theorems, not by philosophers assuming there is an actual thing behind our use of the word, that it is their task to discover.
I don’t mean to pick just on you, but I think philosophy is often unfairly criticized for being less productive than other fields, when the problem is just that philosophy is damned hard, and whenever we do discover, via philosophy, some good method for solving a particular class of problems, then people no longer consider that class of problems to belong to the realm of philosophy, and forget that philosophy is what allowed us to get started in the first place. For example, without philosophy, how would one have known that proving theorems using logic might be a good way to understand things like circles, lines, and shapes (or even came up with the idea of “logic”)?
(Which isn’t to say that there might not be wrong ways to do philosophy. I just think we should cut philosophers some slack for doing things that turn out to be unproductive in retrospect, and appreciate more the genuine progress they have made.)
How people like Euclid came up with the methods they did is, I suppose, lost in the mists of history. Were Euclid and his predecessors doing “philosophy”? That’s just a definitional question.
The problem is that there is no such thing as philosophy. You cannot go and “do philosophy”, in the way that you can “do mathematics” or “do skiing”. There are only people thinking, some well and some badly. The less they get out of their armchairs, the more their activity is likely to be called philosophy, and in general, the less useful their activity is likely to be. Mathematics is the only exception, and only superficially, because mathematical objects are clearly outside your head, just as much as physical ones are. You bang up against them, in a way that never happens in philosophy.
When philosophy works, it isn’t philosophy any more, so the study of philosophy is the study of what didn’t work. It’s a subject defined by negation, like the biology of non-elephants. It’s like a small town in which you cannot achieve anything of substance except by leaving it. Philosophers are the ones who stay there all their lives.
I realise that I’m doing whatever the opposite is of cutting them some slack. Maybe trussing them up and dumping them in the trash.
What has philosophy ever done for us? :-) I just googled that exact phrase, and the same without the “ever”, but none of the hits gave a satisfactory defence. In fact, I turned up this quote from the philosopher Austin, characterising philosophy much as I did above:
“It’s the dumping ground for all the leftovers from other sciences, where everything turns up which we don’t know quite how to take. As soon as someone discovers a reputable and reliable method of handling some portion of these residual problems, a new science is set up, which tends to break away from philosophy.”
Responding to the sibling comment here as it’s one train of thought:
By knowing this without knowing why. That’s all that a priori knowledge is: stuff you know without knowing why. Or to make the levels of abstraction more explicit, a priori beliefs are beliefs you have without knowing why. Once you start thinking about them, asking why you believe something and finding reasons to accept or reject it, it’s no longer a priori. The way to discover whether either of those questions is sensible is to try answering them and see where that leads you.
That activity is called “philosophy”, but only until the process gets traction and goes somewhere. Then it’s something else.
This is a nice concise statement of the idea that didn’t easily get across through the posts A Priori and How to Convince Me That 2 + 2 = 3.
I think there are useful kinds of thought that are best categorized as “philosophy” (even if it’s just “philosophy of the gaps”, i.e. not clear enough to fall into an existing field); mostly around the area of how we should adapt our behavior or values in light of learning about game theory, evolutionary biology, neuroscience etc. - for example, “We are the product of evolution, therefore it’s every man for himself” is the product of bad philosophy, and should be fixed with better philosophy rather than with arguments from evolutionary biology or sociology.
A lot of what we discuss here on LessWrong falls more easily under the heading of “philosophy” than that of any other specific field.
(Note that whether most academic philosophers are producing any valuable intellectual contributions is a different question, I’m only arguing “some valuable contributions are philosophy”)
How might one know, a priori, that “What is a circle?” is a valid question to ask, but not “What is a shape?”
Well, we seem to have this word, “meaning”, that pops up a lot and that lots of people seem to think is pretty interesting, and questions of whether people “mean” the same thing as other people do turn up quite often. That said, it’s often a pretty confusing topic. So it seems worthwhile to try and think about what’s going on with the word “meaning” when people use it, and if possible, clarify it. If you’re just totally uninterested in that, fine. Or you can just ditch the concept of “meaning” altogether, but good luck talking to anyone else about interesting stuff in that case!
Well, I did just post my thinking about that, and I feel like I’m the only person pointing out that Putnam and the rest are arguing the acoustics of unheard falling trees. To me, the issue is dissolved so thoroughly that there isn’t a question left, other than the real questions of what’s going on in our brains when we talk.
Okay, I was kind of interpreting you as just not being interested in these kinds of question. I agree that some questions about “meaning” don’t go anywhere and need to be dissolved, but I don’t think that all such questions can be dissolved. If you don’t think that any such questions are legitimate, then obviously this will look like a total waste of time to you.
One person pointing it out suffices. (I tend to agree with your position.)
EY discussed this in depth in The Quotation Is Not the Referent.
The idea produces non-obvious results if you apply it to, for example, mathematical concepts. They certainly refer to something, which is therefore outside the mind. Conclusion: Hylaean Theoric World.
Being convinced by Putnam on this front doesn’t mean that you have to think that everything refers! There are plenty of accounts of what’s going on with mathematics that don’t have mathematical terms referring to floaty mathematical entities. Besides, Putnam’s point isn’t that the referent of a term is going to be outside your head; that’s pretty uncontroversial, as long as you think we’re talking about something outside your head. What he argues is that this means that the meaning of a term depends on stuff outside your head, which is a bit different.
Could you list the one(s) that you find convincing? (even if this is somewhat off-topic in this thread...)
That is, IIUC, the “meaning” of a concept is not completely defined by its place within the mind’s conceptual structure. This seems correct, as the “meaning” is supposed to be about the correspondence between the map and the territory, and not about some topological property of the map.
Have a look here for a reasonable overview of philosophy of maths. Any kind of formalism or nominalism won’t have floaty mathematical entities—in the former case you’re talking about concrete symbols, and in the latter case about the physical world in some way (these are broad categories, so I’m being vague).
Personally, I think a kind of logical modal structuralism is on the right track. That would claim that when you make a mathematical statement, you’re really saying: “It is a necessary logical truth that any system which satisfied my axioms would also satisfy this conclusion.”
So if you say “2+2 = 4”, you’re actually saying that if there were a system that behaved like the natural numbers (which is logically possible, so long as the axioms are consistent), then in that system two plus two would equal four.
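The modal structuralist paraphrase can be written out schematically (my notation, not necessarily Hellman’s exact formalism; PA here abbreviates the second-order Peano axioms):

```latex
\Box\, \forall X \, \bigl( \mathrm{PA}(X) \;\rightarrow\; 2_X +_X 2_X = 4_X \bigr)
```

where the box is logical necessity and the subscripted symbols are X’s own numerals and addition operation. The claim quantifies over possible systems rather than asserting the existence of any particular one, which is how it avoids commitment to floaty mathematical entities.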
See Hellman’s “Mathematics Without Numbers” for the classic defense of this kind of position.
Thanks for the answer! But I am still confused regarding the ontological status of “2” under many of the philosophical positions. Or, better yet, the ontological status of the real numbers field R. Formalism and platonism are easy: under formalism, R is a symbol that has no referent. Under platonism, R exists in the HTW. If I understand your preferred position correctly, it says: “any system that satisfies axioms of R also satisfies the various theorems about it”. But, assuming the universe is finite or discrete, there is no physical system that satisfies axioms of R. Does it mean your position reduces to formalism then?
There’s no actual system that satisfies the axioms of the reals, but there (logically) could be. If you like, you could say that there is a “possible system” that satisfies those axioms (as long as they’re not contradictory!).
The real answer is that talk of numbers as entities can be thought of as syntactic sugar for saying that certain logical implications hold. It’s somewhat revisionary, in that that’s not what people think they are doing, and people talked about numbers long before they knew of any axiomatizations for them, but if you think about it, it’s pretty clear why those ways of talking would have worked, even if people hadn’t quite figured out the right way to think about it yet.
If you like, you can think of it as saying: “Numbers don’t exist as floaty entities, so strictly speaking normal number talk is all wrong. However, [facts about logical implications] are true, and there’s a pretty clear truth-preserving mapping between the two, so perhaps this is what people were trying to get at.”
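On that reading, ordinary talk about the reals unpacks into a pair of modal claims (again schematic, and my own shorthand: RCF abbreviates the second-order axioms of a complete ordered field, and φ is any theorem about the reals):

```latex
\Diamond\, \exists X\, \mathrm{RCF}(X)
\qquad \text{and} \qquad
\Box\, \forall X\, \bigl( \mathrm{RCF}(X) \;\rightarrow\; \varphi_X \bigr)
```

The possibility claim is what does the work against the finite-universe worry raised above: the axioms need only be possibly satisfied, not satisfied by any actual physical system.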
Seems to me that you can dodge the Platonic implications (that Anathem was riffing on). You can talk about relations between objects, which depend on objects outside the mind of the speaker but have no independent physical existence in themselves; you need not only a shared referent but also some shared inference, but that’s still quite achievable without needing to invoke some Form of, say, mathematical associativeness.
The robot-cat example is, in fact, one of Putnam’s examples. See page 162.
Indeed, that’s where I stole it from ;)
“that’s not the sort of thing that ever could conflict with science!” Do you mean to include psychology in ‘science’? If so, why would you care about it then?
Psychology could (and often does!) show that the way we think about our own minds is just unhelpful in some way: actually, we work differently. I think the job of philosophy is to clarify what we’re actually doing when we talk about our minds, say, regardless of whether that turns out to be a sensible way to talk about them. Psychology might then counsel that we ditch that way of talking! Sometimes we might get to that conclusion from within philosophy; e.g. Parfit’s conclusion that our notion of personal identity is just pretty incoherent.
I meant to suggest that any philosophy which could never conflict with science is immediately suspicious unless you mean something relatively narrow by ‘science’ (for example, by excluding psychology). If you claim that something could never be disproven by science, that’s pretty close to saying ‘it won’t ever affect your decisions’, in which case, why care?
I think of philosophy as more like trying to fix the software that your brain runs on. Which includes, for example, how you categorize the outside world, and also your own model of yourself. That sounds like it ought to be the stamping ground of cognitive science, but we actually have a nice, high-level access to this kind of thing that doesn’t involve thinking about neurons at all: language. So we can work at that level, instead (or as well).
A lot of the stuff in the Sequences, for example, falls under this: it’s an investigation into what the hell is going on with our mindware, (mostly) done at the high level of language.
(Disclaimer: Philosophers differ a lot about what they think philosophy does/should do. Some of them definitely do think that it can tell you stuff about the world that science can’t, or that it can overrule it, or any number of crazy things!)
That would be an even weirder version of Earth. Well, less weird because it wouldn’t be a barren, waterless hellscape, but easier for my mind to paint.
A universe where cats were replaced with cat-imitating robots would be amazing for humans. Instead of the bronze age, we would hunt cats for their strong skeletons to use as tools and weapons. Should the skeletons be made instead of brittle epoxy of some kind, we would be able to study cat factories and bootstrap our mechanical knowledge. Should cats be self-replicating with nano-machines, we would employ them as guard animals for crops, bootstrapping agriculture; an artificial animal which cannot be eaten would have caused other animals to evolve not to mess with them. Should cats, somehow, manage to turn themselves edible after they die, we would still be able to look at their construction and know that they were not crafted by evolution; humanity would know that there was another race out there in the stars and that artificial life was possible. Twin-Eliezer could point to cats and say, “see, we can do this,” and all of humanity would be able to agree and put huge sums of money into AI research.
And if they are cat-robots who are indeed made of bone instead of metal, who reproduce just like cats do, who have exactly the same chemical composition as cats, and evolved here on earth in the exact same way cats do… then they’re just cats. The concept of identical-robot-cats is no different than the worthless concept of philosophical zombies. That’s the whole point of the quote.
I feel like you’re fighting the hypothetical a bit here.
Perhaps “cat” was a bad idea: we know too much about cats. Pick something where there are some properties that we don’t know about yet; then consider the situation where they are as they actually are, and where they’re different. The two would be indistinguishable to us, but that doesn’t mean that no experiment could ever tell them apart. See also asparisi’s comment.
I am most assuredly fighting the hypothetical (I’m familiar with and disagree with that link). As far as I can tell, that’s what Thagard is doing too.
I’m reminded of a rebuttal to that post, about how hypotheticals are used as a trap. Putnam intentionally chose to create a scientifically incoherent world. He could have chosen a jar of acid instead of an incoherent twin-earth, but he didn’t. He wanted the sort of confusion that could only come from an incoherent universe (luke links that in his quote).
I think that’s Thagard’s point. As he notes: these types of thought experiments are only expressions of our ignorance, and not deep insights about the mind.
What mileage do you think Putnam is getting here from creating this confusion? Do you think the point he’s trying to make hinges on the incoherence of the world he’s constructed?
I’m not quite sure why it matters that the world Putnam creates is “scientifically incoherent”—which I take to mean it conflicts with our current understanding of science?
As far as we know, the facts of science could have been different; hell, we could still be wrong about the ones we currently think we know. So our language ought to be able to cope with situations where the scientific facts are different than they actually are. It doesn’t matter that Putnam’s scenario can’t happen in this world: it could have happened, and thinking about what we would want to say in that situation can be illuminating. That’s all that’s being claimed here.
I wonder if the problem is referring to these kinds of things as “thought experiments”. They’re not really experiments. Imagine a non-native speaker asking you about the usage of a word, who concocts an unlikely (or maybe even impossible scenario) and then asks you whether the word would apply in that situation. That’s more like what’s going on, and it doesn’t bear a lot of resemblance to a scientific experiment!
Well you could go for something much more subtle, like using sugar of the opposite handedness on the other ‘Earth’. I don’t think it really changes the argument much whether the distinction is subtle or not.