Any discussion of what art is. Any discussion of whether or not the universe is real. Any conversation about whether machines can truly be intelligent. More specifically, the ship of Theseus thought experiment and the related sorites paradox are entirely definitional, as is Edmund Gettier’s problem of knowledge. The (appallingly bad, by the way) Swampman argument by Donald Davidson hinges entirely on the belief that words actually refer to things. Shades of this pop up in Searle’s Chinese room and other bad thought experiments.
I could go on, but that would require me to actually go out and start reading philosophy papers, and goodness knows I hate that.
Your examples include:
(1) Any discussion of what art is.
(2) Any discussion of whether or not the universe is real.
(3) Any conversation about whether machines can truly be intelligent.
I agree that the answers to these questions depend on definitions, but then, so does the answer to the question, “how long is this stick?”. Depending on your definition, the answer may be “this many meters long”, “depends on which reference frame you’re using”, “the concept of a fixed length makes no sense at this scale and temperature”, or “it’s not a stick, it’s a cube”. That doesn’t mean that the question is inherently confused, only that you and your interlocutor have a communication problem.
That said, I believe that questions (1) and (3) are, in fact, questions about humans. They can be rephrased as “what causes humans to interpret an object or a performance as art”, and “what kind of things do humans consider to be intelligent”. The answers to these questions would be complex, involving multi-modal distributions with fuzzy boundaries, etc., but that still does not necessarily imply that the questions are confused.
Which is not to say that confused questions don’t exist, or that modern philosophical academia isn’t riddled with them; all I’m saying is that your examples are not convincing.
I think he meant that those questions depend ONLY on definitions.
As in, there’s a lot of interesting real-world knowledge that goes into getting a submarine to propel itself, but now that we have that knowledge, asking “can a submarine swim?” is only interesting for deciding “should the English word ‘swim’ apply to the motion of a submarine, which is somewhat like the motion of swimming, but not entirely”. That example sounds stupid, but people waste a lot of time on the similar case of “think” instead of “swim”.
Ok, that’s a good point; inserting the word “only” in there does make a huge difference.
I also agree with BerryPick6 on this sub-thread.
“What causes humans to interpret an object or a performance as art” and “What is art?” may be seen as two entirely different questions to certain philosophers. I’m skeptical that people who frequent this site would make such a distinction, but we aren’t talking about LWers here.
People who frequent this site already do make parallel distinctions about more LW-friendly topics. For instance, the point of the Art of Rationality is that there is a right way to do thinking and persuading, which is not to say that Reason “just is” whatever happens to persuade or convince people, since people can be persuaded by bad arguments. If that can be made to work, then “it’s hanging in a gallery, but it isn’t art” can be made to work.
ETA:
Rationality is about humans, in a sense, too. The moral is that being “about humans” doesn’t imply that the search for norms, real meanings, or genuine/pseudo distinctions is fruitless.
Agreed, but my point was that questions about humans are questions about the Universe (since humans are part of it), and therefore they can be answerable and meaningful. Thus, you could indeed come up with an answer that sounds something like, “it’s hanging in a gallery, but our model predicts that it’s only 12.5% art”.
But I agree with BerryPick6 when he says that not all philosophers make that distinction.
There’s a key distinction that I feel you may be glossing over here. In the case of the stick question, there is an extremely high probability that you and the person you’re talking to, though you may not be using exactly the same definitions, are using definitions that are closely enough entangled with observable features of the world to be broadly isomorphic.
In other words, there is a good chance that, without either of you adjusting your definitions, you and the neurotypical human you’re talking to will be able to come up with some answer that both of you find satisfying, and that will allow you to meaningfully predict future experiences.
With the three examples I raised, this isn’t the case. There are a host of different definitions, which are not closely entangled with simple, observable features of the world. As such, even if you and the person you’re talking to have similar life experiences, there is no guarantee that you will come to the same conclusions, because your definitions are likely to be personal, and the outcome of the question depends heavily upon those definitions.
Furthermore, in the three cases I mentioned, unlike the stick, if you hold a given position, it’s not at all clear what evidence could persuade you to change your mind, for many possible (and common!) positions. This is a telltale sign of a confused question.
I believe that at least two of those definitions could be something like, “what kinds of humans would consider this art?”, or “will machines ever pass the Turing test?”. These questions are about human actions which express human thoughts, and are indeed observable features of the world. I do agree that there are many other, more personal definitions that are of little use.
I think we need a clearer idea of what we mean by a ‘bad’ thought experiment. Sometimes thought experiments are good precisely because they make us recognize (sometimes deliberately) that one of the concepts we imported into the experiment is unworkable. Searle’s Chinese room is a good example of this, since it (and a class of similar thought experiments) helps show that our intuitive conceptions of the mental are, on a physicalist account, defective in a variety of ways. The right response is to analyze and revise the problem concepts. The right response is not to simply pretend that the thought experiment was never proposed; the results of thought experiments are data, even if they’re only data about our own imaginative faculties.
My first thought was “every philosophical thought experiment ever”, and to my surprise Wikipedia says there aren’t that many thought experiments in philosophy (although they are huge topics of discussion). I think the violinist experiment is uniquely bad. The floating man experiment is another good example, but very old.
What’s your objection to the violinist thought experiment? If you’re a utilitarian, perhaps you don’t think the waters here are very deep. It’s certainly a useful way of deflating and short-circuiting certain other intuitions that block scientific and medical progress in much of the developed world, though.
From SEP:
Judith Thomson provided one of the most striking and effective thought experiments in the moral realm (see Thomson, 1971). Her example is aimed at a popular anti-abortion argument that goes something like this: The foetus is an innocent person with a right to life. Abortion results in the death of a foetus. Therefore, abortion is morally wrong. In her thought experiment we are asked to imagine a famous violinist falling into a coma. The society of music lovers determines from medical records that you and you alone can save the violinist’s life by being hooked up to him for nine months. The music lovers break into your home while you are asleep and hook the unconscious (and unknowing, hence innocent) violinist to you. You may want to unhook him, but you are then faced with this argument put forward by the music lovers: The violinist is an innocent person with a right to life. Unhooking him will result in his death. Therefore, unhooking him is morally wrong.
However, the argument, even though it has the same structure as the anti-abortion argument, does not seem convincing in this case. You would be very generous to remain attached and in bed for nine months, but you are not morally obliged to do so.
The thought experiment depends on your intuitions, or your definition of moral obligation and wrongness, but the experiment doesn’t make these distinctions. It just pretends that everyone has the same intuition, and that the experiment therefore remains analogous regardless (probably because Judith didn’t think anyone else could have different intuitions), and so then you have all these other philosophers and people arguing about the minutiae and adding on further qualifications and modifications, to the point that they may as well be talking about actual abortion.
The thought experiment functions as an informal reductio ad absurdum of the argument ‘Fetuses are people. Therefore abortion is immoral.’ or ‘Fetuses are conscious. Therefore abortion is immoral.’ That’s all it’s doing. If you didn’t find the arguments compelling in the first place, then the reductio won’t be relevant to you. Likewise, if you think the whole moral framework underlying these anti-abortion arguments is suspect, then you may want to fight things out at the fundaments rather than getting into nitty-gritty details like this. The significance of the violinist thought experiment is that you don’t need to question the anti-abortionist’s premises in order to undermine the most common anti-abortion arguments; they yield consequences all on their own that most anti-abortionists would find unacceptable.
That is the dialectical significance of the above argument. It has nothing to do with assuming that everyone found the original anti-abortion argument plausible. An initially implausible argument that’s sufficiently popular may still be worth analyzing and refuting.
I am unimpressed by your examples.
Can we first agree that some questions are not dissolved by observing that meanings are conventional? If I run up to you and say “My house is on fire, what should I do?”, and you tell me “The answer depends, in part, on what you mean by ‘house’ and ‘fire’...”, that will not save my possessions from destruction.
If I take your preceding comment at face value, then you are telling me
there is nothing to think about in pondering the nature of art, it’s just a matter of definition
there is nothing to think about regarding whether the universe exists, it’s just a matter of definition
there’s no question of whether artificial intelligence is the same thing as natural intelligence, it’s just a matter of definition
and that there’s no “house-on-fire” real issue lurking anywhere behind these topics. Is that really what you think?
Well, I’m sorry. Please fill out a conversational complaint form and put it in the box, and an HR representative will mail you a more detailed survey in six to eight weeks.
I agree entirely that meaningful questions exist, and made no claim to the contrary. I do not believe, however, that as an institution, modern philosophy is particularly good at identifying those questions.
In response to your questions,
Yes, absolutely.
Yes, mostly. There are different kinds of existence, but the answer you get out will depend entirely on your definitions.
Yes, mostly. There are different kinds of possible artificial intelligence, but the question of whether machines can -truly- be intelligent depends exclusively upon your definition of intelligence.
As a general rule, if you can’t imagine any piece of experimental evidence settling a question, it’s probably a definitional one.
The true natures of art, existence, and intelligence are all substantial topics—highly substantial! In each case, like the physical house-on-fire, there is an object of inquiry independent of the name we give it.
With respect to art—think of the analogous question concerning science. Would you be so quick to claim that whether something is science is purely a matter of definition?
With respect to existence—whether the universe is real—we can distinguish possibilities such as: there really is a universe containing billions of light-years of galaxies full of stars; there is a brain in a vat being fed illusory stimuli, with the real world actually being quite unlike the world described by known physics and astronomy; and even solipsistic metaphysical idealism—there is no matter at all, just a perceiving consciousness having experiences.
If I ponder whether the universe is real, I am trying to choose between these and other options. Since I know that the universe appears to be there, I also know that any viable scenario must contain “apparent universe” as an entity. To insist that the reality of the universe is just a matter of definition, you must say that “apparent universe” in all its forms is potentially worthy of the name “actual universe”. That’s certainly not true to what I would mean by “real”. If I ask whether the Andromeda galaxy is real, I mean whether there really is a vast tract of space populated with trillions of stars, etc. A data structure providing a small part of the cosmic backdrop in a simulated experience would not count.
With respect to intelligence—I think the root of the problem here is that you think you already know what intelligence in humans is—that it is fundamentally just computation—and that the boundary between smart computation and dumb computation is obviously arbitrary. It’s like thinking of a cloud as “water vapor”. Water vapor can congregate on a continuum of scales from invisibly small to kilometers in size, and a cloud is just a fuzzy naive category employed by humans for the water vapor they can see in the sky.
Intelligence, so the argument goes, is similarly a fuzzy naive category employed by humans for the computation they can see in human behavior. There would be some truth to that analysis of the concept… except that, in the longer run, we may find ourselves wanting to say that certain highly specific refinements of the original concept are the only reasonable ways of making it precise. Intelligence implies something like sophisticated insight; so it can’t apply to anything too simple (like a thermostat), and it can’t apply to algorithms that work through brute force.
And then there is the whole question of consciousness and its role in human intelligence. We may end up wishing to say that there is a fundamental distinction between conscious intelligence—sophisticated cognition which employs genuine insight, i.e. conscious insight, conscious awareness of salient facts and relations—and unconscious intelligence—where the “insight” is really a matter of computational efficiency. The topic of intelligence is the one where I would come closest to endorsing your semantic relativism, but that’s only because in this case, the “independent object of inquiry” appears to include heterogeneous phenomena (e.g. sophisticated conscious cognition, sophisticated unconscious cognition, sophisticated general problem-solving algorithms), and how we end up designating those phenomena once we obtain a mature understanding of their nature, might be somewhat contingent after all.
So what’s the difference between philosophy and science then?
Err… science deals with questions you can settle with evidence? I’m not sure what you’re getting at here.
How does your use of the label “philosophical” fit in with your uses of the categories “definitional” and “can be settled by experimental evidence”?