(I haven’t caught up on the entire thread, apologies if this is a repeat)
Assuming the “qualia is a misguided pseudoconcept” claim is true, do you have a sense of why people think that it’s real? I.e., taking the evidence of “Somehow, people end up saying sentences about how they have a sense of what it is like to perceive things. Why is that? What process would generate people saying words like that?” (This is not meant to be a gotcha, it just seems like a good question to ask)
No worries, it’s not a gotcha at all, and I already have some thoughts about this.
I was more interested in this topic back about seven or eight years ago, when I was actually studying it. I moved on to psychology and metaethics, and haven’t been actively reading about this stuff since about 2014.
I’m not sure it’d be ideal to try to dredge all that up, but I can roughly point towards something like Robbins and Jack (2006) as an example of the kind of research I’d employ to develop a type of debunking explanation for qualia intuitions. I am not necessarily claiming their specific account is correct, or rigorous, or sufficient all on its own, but it points to the kind of work cognitive scientists and philosophers could do that is at least in the ballpark.
Roughly, they attempt to offer an empirical explanation for the persistence of the explanatory gap (the problem of accounting for consciousness by appeal to physical, or at least nonconscious, phenomena). Its persistence could be due to quirks in the way human cognition works. If so, it may be difficult to dispel certain kinds of introspective illusions.
Roughly, suppose we have multiple, distinct “mapping systems” that each independently operate to populate their own maps of the territory. Each of these systems evolved and currently functions to facilitate adaptive behavior. However, we may discover that when we go to formulate comprehensive and rigorous theories about how the world is, these maps seem to provide us with conflicting or confusing information.
Suppose one of these mapping systems was a “physical stuff” map. It populates our world with objects, and we have the overwhelming impression that there is “physical stuff” out there, that we can detect using our senses.
But suppose we also have an “important agents that I need to treat well” system, one that detects and highlights certain agents within the world whom it would be important to treat appropriately, a kind of “VIP agency mapping system” that recruits a host of appropriate functional responses: emotional reactions, adopting the intentional stance, cheater-detection systems, and so on.
On reflecting on the first system, we might come to form the view that the external world really is just this stuff described by physics, whatever that is. And that includes the VIP agents we interact with: they’re bags of meat! But this butts up against the overwhelming impression that they just couldn’t be. They must be more than just bags of meat. They have feelings! We may find ourselves incapable of shaking this impression, no matter how much of a reductionist or naturalist or whatever we might like to be.
What could be going on here is simply the inability for these two mapping systems to adequately talk to one another. We are host to divided minds with balkanized mapping systems, and may find that we simply cannot grok some of the concepts contained in one of our mapping systems in terms of the mapping system in the other. You might call this something like “internal failure to grok.” It isn’t that, say, I cannot grok some other person’s concepts, but that some of the cognitive systems I possess cannot grok each other.
Or call it something like “conceptual incommensurability.” And if we’re stuck with a cognitive architecture like this, certain intuitions may seem incorrigible, even if we could come up with a good model, based on solid evidence, that would explain why things would seem this way to us, without our having to suppose that things really are that way.
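If it helps to make the picture concrete, here’s a deliberately crude toy sketch of what I mean by balkanized mapping systems. Everything in it (the class names, the vocabularies, the translate function) is my own invention for illustration; it is not a model from Robbins and Jack or anyone else. Two subsystems each label the same entity using predicate vocabularies that share no terms, so there is simply no way to render one map’s labels in the other’s:

```python
# Toy model of "balkanized mapping systems": two independent subsystems
# each build their own map of the same territory, using vocabularies
# that share no common terms. All names here are illustrative inventions.

from dataclasses import dataclass

@dataclass
class Entity:
    """Something in the territory; both systems observe the same entity."""
    name: str
    mass_kg: float
    shows_goal_directed_motion: bool

class PhysicalMapper:
    """Populates its map purely with 'physical stuff' predicates."""
    vocabulary = {"object", "mass", "extended_body"}

    def map_entity(self, e: Entity) -> dict:
        return {"kind": "object", "mass": e.mass_kg}

class VipAgencyMapper:
    """Flags entities that warrant the intentional stance and moral concern."""
    vocabulary = {"agent", "has_feelings", "treat_well"}

    def map_entity(self, e: Entity) -> dict:
        if e.shows_goal_directed_motion:
            return {"kind": "agent", "has_feelings": True, "treat_well": True}
        return {}

def translate(label, target):
    """Try to express a label in the target system's vocabulary.
    Disjoint vocabularies with no shared primitives mean this always
    fails: a stand-in for the 'internal failure to grok'."""
    return label if label in target.vocabulary else None

alice = Entity("Alice", mass_kg=62.0, shows_goal_directed_motion=True)
phys, vip = PhysicalMapper(), VipAgencyMapper()
print(phys.map_entity(alice))         # {'kind': 'object', 'mass': 62.0}
print(vip.map_entity(alice))          # {'kind': 'agent', 'has_feelings': True, ...}
print(translate("has_feelings", phys))  # None: no physical-map rendering exists
```

The point of the failed translation is just that the incommensurability here is architectural: nothing is hidden or mysterious about either map taken on its own; they simply lack common primitives.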
I forgot to add a reference to the Robbins and Jack citation above. Here it is:
Robbins, P., & Jack, A. I. (2006). The phenomenal stance. Philosophical Studies, 127(1), 59–85.