Here’s a related illusionist-compatible evolutionary hypothesis about consciousness: consciousness evolved to give us certain resilient beliefs that are adaptive to have. For example, belief in your own consciousness contributes to the belief that death would be bad, and this belief is used when you reason and plan, especially to avoid death. The badness or undesirability of suffering (or the things that cause us suffering) is another such resilient belief. In general, we use reason and planning to pursue things we believe are good and prevent things we believe are bad. Many of the things we believe are good or bad have been shaped by evolution to cause us pleasure or suffering, so evolution was able to hijack our capacities for reason and planning to spread genes further.
Then this raises some questions: for what kinds of reasoning and planning would such beliefs actually be useful (over what we would do without them)? Is language necessary? How much? How sophisticated was the language of early Homo sapiens or earlier ancestors, and how much have our brains and cognitive capacities changed since then? Do animals trained to communicate more (chimps, gorillas, parrots, or even cats and dogs with word buttons) meet the bar?
When I think about an animal simulating outcomes (e.g. visualizing or reasoning about them) and deciding how to act based on whichever outcome seems most desirable, I’m not sure you really need “beliefs” at all. The animal can react emotionally or with desire to the simulation, and that reaction becomes associated with the option that generated it, so options end up more or less attractive this way.
Also, somewhat of an aside: some illusions are like lies of omission and disappear once you explain what’s missing (some optical illusions, magic tricks), while others are like lies of commission and don’t disappear even when explained (many optical illusions). Consciousness illusions seem more like the latter: people aren’t going to stop believing they’re conscious even if they understand how consciousness works. See https://link.springer.com/article/10.1007/s10670-019-00204-4
I think some nonhuman animals also have such rich illusions, like the rubber tail illusion in rodents and possibly some optical illusions, but it’s not clear what this says about their consciousness under illusionism.