I have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.
I think people who refer to animal behavior in making statements about consciousness are making a claim more along the lines of “given that a being has a brain with superficial similarities to ours and evolved via a process similar to our own evolution, we can take its behavior as a higher-level indicator of what its brain is doing and infer things about consciousness.” Otherwise, these people would also grant consciousness to all sorts of things we make that show superficially human behavior but have obviously different mechanisms (e.g., non-player characters in MMOs, chatbots).
If you’re going to make guesses about whether a species is conscious, you should first look at neural correlates of consciousness and valence and then try to find those correlates in animals. You shouldn’t look at animal behavior at all.
I read a lot more about consciousness back in the day, and I’m not convinced that neural correlates are any better evidence for consciousness than behavior, given that the beings we’re considering already have brains. I’m no expert, but per Wikipedia’s article on neural correlates of consciousness, we don’t have much in the way of established correlates:
Given the absence of any accepted criterion of the minimal neuronal correlates necessary for consciousness, the distinction between a persistently vegetative patient who shows regular sleep-wake transitions and may be able to move or smile, and a minimally conscious patient who can communicate (on occasion) in a meaningful manner (for instance, by differential eye movements) and who shows some signs of consciousness, is often difficult.
Per Open Philanthropy’s 2017 report on consciousness, in its discussion of cortex-requiring views (CRVs), we’re not really sure how important having a cortex is for consciousness:
Several authors have summarized additional arguments against CRVs, but I don’t find any of them to be even moderately conclusive. I do, however, think all this is sufficient to conclude that the case for CRVs is unconvincing. Hence, I don’t think there is even a “moderately strong” case for the cortex as a necessary condition for phenomenal consciousness (in humans and animals). But, I could imagine the case becoming stronger (or weaker) with further research.
And from the same report, there aren’t really any clear biological factors that can be used to draw lines about consciousness:
How did my mind change during this investigation? First, during the first few months of this investigation, I raised my probability that a very wide range of animals might be conscious. However, this had more to do with a “negative” discovery than a “positive” one, in the following sense: Before I began this investigation, I hadn’t studied consciousness much, and I held out some hope that there would turn out to be compelling reasons to “draw lines” at certain points in phylogeny, for example between animals which do and don’t have a cortex, and that I could justify a relatively sharp drop in probability of consciousness for species falling “below” those lines. But, as mentioned above, I eventually lost hope that there would (at this time) be compelling arguments for drawing any such lines in phylogeny (short of having a nervous system at all).
Moreover, people who have done far more thorough research into correlates of consciousness than I have use both neural correlates and behavior (e.g., anatomical features as an example of the former, and motivational trade-offs as an example of the latter). Given that animals already share a bunch of similarities with humans, it strikes me as a mistake not to consider behavior at all.