I have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.
It seems quite obvious to me that these people equate their ability to empathize with an animal with the animal’s capacity to be conscious, and it seems quite obvious to me that this is a case of the mind projection fallacy. Empathy is just a simulation. You can’t actually see another mind.
If you’re going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don’t look at animal behavior at all. We have absolutely no reason to believe that behavior correlates with consciousness. That’s just your empathy getting in the way. The same empathy that attributes feelings to stuffed animals.
> We have absolutely no reason to believe that behavior correlates with consciousness.
Not to be pedantic, but what else could consciousness possibly be, except for a way of describing the behavior of some object at a high level of abstraction?
If consciousness were not a behavior, but instead were some intrinsic property of a system, then you run into the exact same argument that David Chalmers uses to argue that philosophical zombies are conceivable. This argument was forcefully rebutted in the Sequences.
ETA: When I say behavior, I mean it in the physical sense. A human who is paralyzed but nonetheless conscious would not be behaviorally identical to a dead human. Superficially, yes, but behavior means more than what is visible from the outside. While you might say that I’m simply using a different definition of behavior than you were, I think it’s still relevant, because any evolutionary reason for consciousness must necessarily show up in observable behavior, or else there is no benefit and we have a mystery.
> Not to be pedantic, but what else could consciousness possibly be, except for a way of describing the behavior of some object at a high level of abstraction?
It could be something that is primarily apparent to the person that has it.
> If consciousness were not a behavior, but instead were some intrinsic property of a system, then you run into the exact same argument that David Chalmers uses to argue that philosophical zombies are conceivable.
That runs together two claims: that consciousness is not behaviour, and that it is independent of physics. You don’t have to accept the second claim in order to accept the first.
And it remains the case that Chalmers doesn’t think zombies are really possible.
> I think it’s still relevant, because any evolutionary reason for consciousness must necessarily show up in observable behavior, or else there is no benefit and we have a mystery.
“Primarily accessible to the person that has it” does not mean “no behavioural consequences”.
> It could be something that is primarily apparent to the person that has it.
I’m not convinced that this definition is sufficiently clear, or that consciousness should be defined this way. Rather, being “readily apparent” is a property that people claim consciousness has, but I am not convinced that it actually has this quality.
In general, rather than taking the properties of consciousness at face value, I take Dennett’s approach of evaluating people’s claims about consciousness from a behavioral-science perspective. From my perspective, once you’ve explained the structure, the dynamics, and the behavior, you’ve explained everything.
> And it remains the case that Chalmers doesn’t think zombies are really possible.
Are you sure? Chalmers argues in Chapter 3 of *The Conscious Mind* that zombies are logically possible. I am not even sure what force the zombie argument could hold if he thought zombies were not logically possible.
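It’s not intended to be a complete definition of consciousness, just a nudge away from behaviourism.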
From my point of view, that’s missing the central point quite badly.
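And elsewhere that they are metaphysically impossible.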
Could you let me know why? What about consciousness is missed by a purely behavioral description of an object (keeping in mind that what I mean by behavior is very broad, and includes things like the behavior of electrical signals)?
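What is missed is the way it seems from the inside, as I pointed out originally. I don’t have to put my head into an fMRI to know that I am conscious.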
> I have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.
I think people who refer to animal behavior in making statements about consciousness are making a claim more along the lines of “given that a being has a brain with superficial similarities to ours and was evolved via a process similar to our own evolution, we can take its behavior as a higher-level indicator of what its brain is doing and infer things about consciousness.” Otherwise, these people would also grant consciousness to all sorts of things we make with superficially human behavior but obviously different mechanisms (e.g. non-player characters in MMOs, chatbots).
> If you’re going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find these correlates in animals. You don’t look at animal behavior at all.
I read a lot more about consciousness back in the day, and I’m not convinced that neural correlates are any better evidence for consciousness than behavior, given that the beings we’re considering already have brains. I’m no expert, but per Wikipedia on neural correlates of consciousness, we don’t have much in terms of neural correlates:
> Given the absence of any accepted criterion of the minimal neuronal correlates necessary for consciousness, the distinction between a persistently vegetative patient who shows regular sleep-wave transitions and may be able to move or smile, and a minimally conscious patient who can communicate (on occasion) in a meaningful manner (for instance, by differential eye movements) and who shows some signs of consciousness, is often difficult.
Per Open Philanthropy’s 2017 report on consciousness, on the question of cortex-requiring views (CRVs), we’re not really sure how important having a cortex is for consciousness:
> Several authors have summarized additional arguments against CRVs, but I don’t find any of them to be even moderately conclusive. I do, however, think all this is sufficient to conclude that the case for CRVs is unconvincing. Hence, I don’t think there is even a “moderately strong” case for the cortex as a necessary condition for phenomenal consciousness (in humans and animals). But, I could imagine the case becoming stronger (or weaker) with further research.
And from the same report, there aren’t really any clear biological factors that can be used to draw lines about consciousness:
> How did my mind change during this investigation? First, during the first few months of this investigation, I raised my probability that a very wide range of animals might be conscious. However, this had more to do with a “negative” discovery than a “positive” one, in the following sense: Before I began this investigation, I hadn’t studied consciousness much, and I held out some hope that there would turn out to be compelling reasons to “draw lines” at certain points in phylogeny, for example between animals which do and don’t have a cortex, and that I could justify a relatively sharp drop in probability of consciousness for species falling “below” those lines. But, as mentioned above, I eventually lost hope that there would (at this time) be compelling arguments for drawing any such lines in phylogeny (short of having a nervous system at all).
Moreover, people who have done far more thorough research into correlates of consciousness than I have use both (e.g. anatomical features as an example of neural correlates, and motivational trade-offs as an example of behavior). Given that animals already have a bunch of similarities to humans, it strikes me as a mistake not to consider behavior at all.
A reductio ad absurdum for this is the strong skeptical position: I have no particular reason to believe that anything is conscious. All configurations of quantum space are equally valuable, and any division into “entities” with different amounts of moral weight is ridiculous.
> We have absolutely no reason to believe that behavior correlates with consciousness.
The strong version of this can’t be true. You claiming that you’re conscious is part of your behaviour. Hopefully, it’s approximately true that you would claim that you’re conscious iff you believe that you’re conscious. If behaviour doesn’t at all correlate with consciousness, it follows that your belief in consciousness doesn’t at all correlate with your being conscious. Which is a reductio, because the whole point of having beliefs is to correlate them with the truth.
Right, right. So there is a correlation.
I’ll just say that there is no reason to believe that this correlation is very strong.
I once won a Mario Kart tournament without feeling my hands.