So with consciousness, is it a useful concept? Well, it certainly labels something without which I would simply not care about this conversation at all, as well as a bunch of other things. I personally believe p-zombies are impossible, that building a working human except without consciousness would be like building a working gasoline engine except without heat. I mention this for context; I think my belief about p-zombies is actually pretty common.
About the statement “I shouldn’t eat chickens because they are conscious”: you ask what it is about consciousness that makes it wrong to eat its possessor. You don’t really try to answer the question, but I think there is an answer: we shouldn’t eat things that don’t want us to eat them. Probably more to the point, we shouldn’t kill things that don’t want us to kill them, and I would imagine the chicken is much more concerned with our killing it than with what happens after that. And with that idea, if we relabel consciousness as zxc, but zxc is still that thing that allows something to want other things, then it still works to say we shouldn’t eat chickens because they are zxc and do not want us to kill them.
If I have somehow missed your point, I am sorry. I did hope it would be valuable to suggest that “we shouldn’t eat chickens because they don’t want us to kill them” was a more fundamental moral statement than appealing to the abstraction of their consciousness.
Yes. If we change “We shouldn’t eat chickens because they are conscious” to “We shouldn’t eat chickens because they want to not be eaten,” then this becomes another example where, once we cash out what was meant, the term ‘consciousness’ can be circumvented entirely and replaced with a less philosophically murky concept. In this particular case, how clear the concept of ‘wanting’ (as it relates to chickens) is might be disputed, but it seems clearly a lesser mystery or lack of clarity than the monolith of ‘consciousness’.
we shouldn’t kill things that don’t want us to kill them
Every living thing “wants” not to be killed, even plants. This is part of the expressed preferences of their death-avoiding behavior. How does this help you assign quantitative moral value to killing some but not others?
You write that consciousness is “that thing that allows something to want other things”, but how do you define or measure the presence of “wanting” except behavioristically?
You write that consciousness is “that thing that allows something to want other things”, but how do you define or measure the presence of “wanting” except behavioristically?
With very high confidence I know what I want. And for the most part, I don’t infer what I want by observing my own behavior, I observe what I want through introspection. With pretty high confidence, I know some of what other people want when they tell me what they want.
Believing that a chicken doesn’t want to be killed is something for which there is less evidence than with humans. The chicken can’t tell us what it wants, but some people are willing to infer that chickens don’t want to be killed by observing their behavior, which they believe has a significant similarity to their own or other humans’ behavior when they don’t want to be killed. Me, I figure the chicken is just running on automatic pilot and isn’t thinking about whether it will be killed or not, very possibly doesn’t have a concept of being killed at all, and is really demonstrating that it doesn’t want to be caught.
Every living thing “wants” not to be killed, even plants. This is part of the expressed preferences of their death-avoiding behavior. How does this help you assign quantitative moral value to killing some but not others?
Do apples express a preference for gravity by falling from trees? Do rocks express a preference for lowlands by traveling to lowlands during floods? The answer is no; not everything that happens is because the things involved in it happening wanted it that way. Without too much fear of your coming up with a meaningful counterexample: among things currently known by humans on earth, the only things that might even conceivably want things are things that have central nervous systems.
With very high confidence I know what I want. And for the most part, I don’t infer what I want by observing my own behavior, I observe what I want through introspection. With pretty high confidence, I know some of what other people want when they tell me what they want.
With weak to moderate confidence I can expect you to be drastically overconfident in your self-insight into what you want from introspection. (Simply because the probability that you are human is high, human introspection is biased in predictable ways and the evidence supplied by your descriptions of your introspection is insufficient to overcome the base rate.)
The evidence is that humans don’t act in ways entirely consistent with their stated preferences. There is no evidence that their stated preferences are not their preferences. You have to assume that how humans act says more about their preferences than what they say about their preferences. You go down that path and you conclude that apples want to fall from trees.
There is no evidence that their stated preferences are not their preferences.
That’s an incredibly strong claim (“no evidence”). You are giving rather a lot of privilege to the hypothesis that the public relations module of the brain is given unfiltered access to potentially politically compromising information like that and then chooses to divulge it publicly. This is in rather stark contrast to what I have read and what I have experienced.
I’d like to live in a world where what you said is true. It would have saved me years of frustration.
You have to assume that how humans act says more about their preferences than what they say about their preferences.
Both provide useful information, but not necessarily directly. fMRIs can be fun too, albeit just as tricky to map to the ‘want’ concept.
With very high confidence I know what I want. And for the most part, I don’t infer what I want by observing my own behavior, I observe what I want through introspection.
There’s an aphorism that says, “how can I know what I think unless I say it?” This is very true in my experience. And I don’t experience “introspection” to be significantly different from “observation”; it just substitutes speaking out loud for speaking inside my own head, as it were. (Sometimes I also find that I think easier and more clearly if I speak out loud, quietly, to myself, or if I write my thoughts down.)
I’m careful of the typical mind fallacy and don’t want to say my experiences are universal or, indeed, even typical. But neither do I have reason to think that my experience is very strange and everyone else introspects in a qualitatively different way.
I know some of what other people want when they tell me what they want.
Speaking (in this case to other people) is a form of behavior intended (greatly simplifying) to make other people do what you tell them to do. This is precisely inferring “wants” from behavior designed to achieve those wants. (Unless you think language confers special status with regard to wanting.)
Believing that a chicken doesn’t want to be killed is something for which there is less evidence than with humans.
Both people and chickens try to avoid dying. People are much better at it, because they are much smarter. Does that mean people want to avoid dying much more than chickens do? That is just a question about the definition of the word “want”: no answer will tell us anything new about reality.
Me, I figure the chicken is just running on automatic pilot and isn’t thinking about whether it will be killed or not, very possibly doesn’t have a concept of being killed at all, and is really demonstrating that it doesn’t want to be caught.
Does this contradict what you previously said about chickens?
we shouldn’t kill things that don’t want us to kill them
the only things that might even conceivably want things are things that have central nervous systems.
Can you please specify explicitly what you mean by “wanting”?
I know some of what other people want when they tell me what they want.
Speaking (in this case to other people) is a form of behavior intended (greatly simplifying) to make other people do what you tell them to do. This is precisely inferring “wants” from behavior designed to achieve those wants. (Unless you think language confers special status with regard to wanting.)
On the one hand you suggested that plants “want” not to be killed, presumably based on seeing their behavior of sucking up water and sunlight and putting down deeper roots, etc. The behavior you talk about here is non-verbal behavior. In fact, your more precise conclusion from watching plants is that “some plants don’t want to be killed” as you watch them not die, while based purely on observation, to be logical you would have to conclude that “many plants don’t mind being killed” as you watched them modify their behavior not one whit as a harvesting machine drove toward them and then cut them down.
So no, I don’t think we can conclude that a plant wanted to not be killed by watching it grow any more than we can conclude that a car engine wanted to get hot or that a rock wanted to sit still by watching them.
Both people and chickens try to avoid dying. People are much better at it, because they are much smarter. Does that mean people want to avoid dying much more than chickens do? That is just a question about the definition of the word “want”: no answer will tell us anything new about reality.
You have very little (not none, but very little) reason to think a chicken even thinks about dying. We have more reason to think a chicken does not want to be caught. We don’t know if it doesn’t want to be caught because it imagines us wringing its neck and boiling it. In fact, I would imagine most of us don’t imagine it thinks of things in such futuristic detail, even among those of us who think we ought not eat it.
Speaking (in this case to other people) is a form of behavior intended (greatly simplifying) to make other people do what you tell them to do.
That’s a lot to assert. I assert speaking is a form of behavior intended to communicate ideas, to transfer meaning from one mind to another. Is my assertion inferior to yours in any way? When I would lecture to 50 students about electromagnetic fields for 85 minutes at a time, what was I trying to get them to do?
Speaking is a rather particular “form of behavior.” Yes, I like the shorthand of ignoring the medium and looking at the result: I tell you I want money, and you have an idea that I want money as a result of my telling you that. Sure, there is “behavior” in the chain, but the starting point is in my mind and the endpoint is in your mind, and that is the relevant stuff in this case, where we are talking about consciousness and wanting, which are states of mind.
This is precisely inferring “wants” from behavior designed to achieve those wants. (Unless you think language confers special status with regards to wanting.)
I tell you I want money and I want beautiful women to perform sexual favors for me. Here I am communicating wants to you, but how is my communication “designed to achieve those wants?” I submit it isn’t, that your ideas about what talking is for are woefully incomplete.
Can you please specify explicitly what you mean by “wanting”?
It’s a state of mind, an idea with content about the world. It bears on what I am likely to do, but not super directly, as there are thousands (at least) of other things that also influence what I am going to do. But it is the state of mind that is “wanting.”
And so if a chicken wants to not be killed AND you think that something’s wanting something produces a moral obligation upon you to not thwart its desires, then you ought not catch a chicken (and kill it and eat it) if it doesn’t want to be caught. The actual questions: does a chicken want anything? Does it in particular want not to be caught? Does what a chicken wants create an obligation in me? These are all, IMHO, open questions. But the meaning of “a chicken wants to not be caught” seems pretty straightforward, much more straightforward than figuring out whether it is true or not, and whether it matters or not.
Fruits are just parts of a plant, not whole living things. Similarly you might say that somatic cells “want” to die after a few divisions because otherwise they risk turning into a cancer.
Parasites that don’t die when they are eaten obviously don’t count for not wanting to be killed.
Take an apple off the tree, put it in the ground, there’s a decent chance it’ll grow into a new tree. How is that not a “whole living thing?” If some animal ate the apple first, derived metabolic benefits from the juicy part and shat out the seeds intact, that seed would be no less likely to grow. Possibly more so, with nutrient-rich excrement for its initial load of fertilizer.
I agree with the thrust of your first paragraph. But the second one (and to some extent the first) seems to be using a revealed preferences framework that I’m not sure fully captures wanting. E.g. can that framework handle akrasia, irrationality, etc.?
The word “wanting”, like “consciousness”, seems to me not to quite cut reality at its joints. Goal-directed behavior (or its absence) is a much clearer concept, but even then humans rarely have clear goals. As you point out, akrasia and irrationality are common.
So I would rather not use “wanting” if I can avoid it, unless the meaning is clear. For example, saying “I want ice cream now” is a statement about my thoughts and desires right now, and it gives some information about my likely actions; it leaves little room for misunderstanding.
Goal-directed behavior (or its absence) is a much clearer concept, but even then humans rarely have clear goals. As you point out, akrasia and irrationality are common.
This looks like a precision vs. accuracy/relevance tradeoff. For example, some goals that are not explicitly formulated may influence behavior in a limited way that affects actions only in some contexts, perhaps only hypothetical ones (such as those posited to elicit idealized values). Such goals are normatively important (contribute to idealized values), even though formulating what they could be or observing them is difficult.
Every living thing “wants” not to be killed, even plants.
Just not true. There is no sense in which a creature which voluntarily gets killed and has no chance of further mating (and no other behavioural expressions indicating life-desire) can be said to “want” not to be killed. Not even in some sloppy evolutionary anthropomorphic sense.
“Wanting” not to be killed is a useful heuristic in most cases but certainly not all of them.
Also, every use of the word “every” has exceptions.
Yes, inclusive fitness is a much better approximation than “every living thing tries to avoid death”. And gene’s-eye-view is better than that. And non-genetic replicators have their place. And evolved things are adaptation executers. And sometimes living beings are just so bad at avoiding death that their “expressed behavioral preferences” look like something else entirely.
I still think my generalization is a highly accurate one and makes the point I wanted to make.
There are fruits which “want” to be eaten. It’s part of their life cycle. Intestinal parasites, too, although that’s a bit more problematic.
Take an apple off the tree, put it in the ground, there’s a decent chance it’ll grow into a new tree.
Fine. But these fruit don’t want to be killed, just eaten.
Also, every use of the word “every” has exceptions.
Including this one.
Naturally. For instance, true mathematical theorems saying that every X is Y have no exceptions.