Here’s a problem with that line of thought (besides the problem of consciousness itself). Our values were created by evolution. Why would evolution make us care more or less about others depending on whether they have consciousness? I mean, it would make sense if, instead of caring about negative reinforcement as experienced by conscious minds, we just cared about damage to our allies and kin. If our values do refer to consciousness, and assuming that the concept of consciousness has non-trivial information content, what selection pressure caused that information to come into existence?
As always, I’m not sure I completely understand your question, but here’s a stab at an answer anyway.
Evolution made us care about allies and kin, and also about other humans because they could become allies, or because caring is sometimes good for your image. But first you need to determine what a human is. Right now it’s a safe bet that the humans I interact with are all conscious. So if some entity has a chance of not being conscious, I begin to doubt whether it’s really a human and whether I should care about it. An analogy: if I encounter a blue banana, I’ll have second thoughts about eating it, even though evolution didn’t give me a hardcoded drive to desire only yellow bananas. Yellowness (or consciousness) is just empirically correlated with the things evolution wants me to care about.
This analogy seems like a good one. Let me try extending it a bit. Suppose that in our ancestral environment the only banana-shaped things were bananas, and the ability to perceive yellowness had no other fitness benefits. Then wouldn’t it be surprising that we even evolved the ability to perceive yellowness, much less to care about it?
In our actual EEA (environment of evolutionary adaptedness), there were no human-shaped objects that were not humans, so if caring about humans was adaptive, evolution could have just made us care about, say, human-shaped objects that are alive and act intelligently. Why did we evolve the ability (i.e., intuition) to determine whether something is conscious, and to care about that?
Did we? It’s not obvious to me that evolution actually programmed us to care about consciousness in particular, rather than (a subsection of?) current culture conditioning us that way. I’m dubious that all cultures that assigned a particular group of humans a moral status similar to that of animals did this by convincing themselves that that group was not “conscious”, or had to overcome strong evolutionary programming to do so. Also consider the moral weight that many people assign to clearly unconscious embryos, or the moral weight that some apparently assign to fictional characters.
Believing that other people are conscious doesn’t require any special selection pressure: it falls out of the general ability to understand their utterances as referring to something that’s “actually out there”, which is useful for other reasons. Also, we seem to have a generalized adaptation that says “if all previously encountered instances possessed a certain trait, but this instance doesn’t, then begin doubting whether this instance is genuine”.
I agree with the idea that whether something is conscious is probably a large part of whether or not I care about its pain; it’s in line with my current intuition.
Though I also kind of think that making me care about its qualia is what makes something conscious.
This looks correct to me.
So I’m confused.