Not sure about Rob’s view, but I think a lot of people approach this question from a quasi-dualistic perspective: some entities have “internal experiences”, “what-it’s-like-to-be-them”, basically some sort of invisible canvas on which internal experiences, including pleasure and pain, are projected. Then later, it comes to seem that basically everything is physical. So then they reason like “well, everything else in reality has eventually been reduced to physical things, so I’m not sure how, but eventually we will find a way to reduce the invisible canvases as well”. Then in principle, once we know how that reduction works, it could turn out that humans do have something corresponding to an invisible canvas but cats don’t.
As you might guess, I think this view of consciousness is somewhat confused, but it’s a sensible enough starting point in the absence of a reductionist theory of consciousness. I think the actual reduction looks more like an unbundling of the various functions that the ‘invisible canvas’ served in our previous models. So it seems likely that cats have states they find aversive and try to avoid, that they take in sensory input to build a local model of the world, that they perhaps have a global neuronal workspace, etc., all of which inclines me to have a certain amount of sympathy with them. What they probably don’t have is the meta-learned machinery which would make them think there is a hard problem of consciousness, but this doesn’t intuitively feel like it should make me care about them less.
I’m an eliminativist about phenomenal consciousness. :) So I’m pretty far from the dualist perspective, as these things go...!
But discovering that there are no souls doesn’t cause me to stop caring about human welfare. In the same way, discovering that there is no phenomenal consciousness doesn’t cause me to stop caring about human welfare.
Nor does it cause me to decide that ‘human welfare’ is purely a matter of ‘whether the human is smiling, whether they say they’re happy, etc.’. If someone trapped a suffering human brain inside a robot or flesh suit that perpetually smiles, and I learned of this fact, I wouldn’t go ‘Oh, well the part I care about is the external behavior, not the brain state’. I’d go ‘holy shit no’ and try to find a way to alleviate the brain’s suffering and give it a better way to communicate.
Smiling, saying you’re happy, etc. matter to me almost entirely because I believe they correlate with particular brain states (e.g., the closest neural correlate for the folk concept of ‘happiness’). I don’t need a full reduction of ‘happiness’ in order to know that it has something to do with the state of brains. Ditto ‘sentience’, to the extent there’s a nearest-recoverable-concept corresponding to the folk notion.