Another way of seeing why this view is correct is to note that empathy can be evoked by fictional characters, by entities in dreams, etc. If I read a book or view a painting that makes me empathize with the fictional character, this does not make the fictional character sentient.
(It might be evidence that if the fictional character were real, it would be sentient. But that’s not sufficient for a strong ‘reduce everything to empathy’ view. Once you allow that empathy routinely misfires in this way—indeed, that empathy can be misfiring even while the empathizing person realizes this and is not inclined to treat the fictional character as a true moral patient in reality—you lose a lot of the original reason to think ‘it’s all about empathy’ in the first place.)
Good point! I agree that “I feel empathy towards X” is only sufficient to strongly[1] motivate me to help X if I also believe that X is “real”. But I also believe that my interactions with cats are strong evidence that cats are “real”, despite my ignorance about the inner workings of cat brains. This is exactly analogous to how my interactions with humans are strong evidence that humans are “real”, despite my ignorance about human brains. And people justifiably knew that other people are “real” even before it was discovered that the brain is responsible for cognition.
The concept “mind” (insofar as it’s contentful and refers to anything at all) refers to various states or processes of brains. So there’s a straight line from ‘caring about cats’ welfare’ to ‘caring about cats’ minds’ to ‘caring about which states the cat’s brain is in’. If you already get off the train somewhere on that straight line, then I’m not sure why.
I agree that there’s a straight line[2]. But the reason we know brains are relevant is that we observe brain states to be correlated with behavior. If, instead of discovering that cognition runs on brains, we had discovered that it runs on transistor circuits, or is somehow computed inside the liver, we would care about those transistor circuits / livers instead. So your objection that “we don’t know enough about cat brains” is weak: I do know that cat-brains produce cat-behavior, and given that correlation-with-behavior is the only reason we’re looking at brains in the first place, this knowledge counts for a lot, even if it’s far from a perfect picture of how cat brains work. I also don’t have a perfect picture of how human brains work, but I know enough (from observing behavior!) to conclude that I care about humans.
I actually do feel some preference that fictional stories in which sufficiently horrible things happen not exist, even if I’m not consuming those stories, but that’s probably tangential.
I’m not sure I agree that “the concept of mind refers to various states or processes of brains”. We know that, for animals, there is a correspondence between minds and brains. But, e.g., an AI can have a mind without having a brain. I guess you’re talking about “brains” which are not necessarily biological? But then, are “mind” and “brain” just synonyms? Or does “brain” refer to some kind of strong reductionism? I can also imagine a different universe in which minds are ontologically fundamental ingredients of physics.