Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept?
Surely it is a reach to say that the mirror test, with all of its methodological difficulties, can by itself raise our probability estimate of a creature’s possessing self-awareness to near-certainty? I agree that it’s evidence, but calling it a test is pushing it, to say the least. To see just one reason why I might say this, consider that we could, right now, probably program a robot to pass such a test; such a robot would not be self-aware.
As for the rest of your post, I’d like to take this opportunity to object to a common mistake/ploy in such discussions:
“This general ethical principle/heuristic leads to absurdity if applied with the literal-mindedness of a particularly dumb algorithm, therefore reductio ad absurdum.”
Your argument here seems to be something like: “Adult humans are sometimes not self-aware, but we still care about them, even during those times. Is self-awareness therefore irrelevant??” No, of course it’s not. It’s a complex issue. But a chicken is never self-aware, so the point is moot.
Also:
In states of “blind” panic, reflective self-awareness and the capacity for any kind of meta-cognition is lost.
Please provide a citation for this, and I will respond, as my knowledge of this topic (cognitive capacity during states of extreme panic) is not up to giving a considered answer.
Panic disorder is extraordinarily unpleasant.
Having experienced a panic attack on one or two occasions, I am inclined to agree. However, I did not lose my self-concept at those times.
Finally:
But I don’t think we are ethically entitled to induce [panic states in pigs/toddlers] - any more than we are ethically entitled to waterboard a normal adult human.
“Ethically entitled” is not a very useful phrase to use in isolation; utilitarianism[1] can only tell us which of two or more world-states to prefer. I’ve said that I prefer that dogs not be tortured, all else being equal, so if by that you mean that we ought to prefer not to induce panic states in pigs, then sure, I agree. The question is what happens when all else is not equal — which it pretty much never is.
[1] You are speaking from a utilitarian position, yes? If not, then that changes things; “ethically entitled” means something quite different to a deontologist, naturally.
Your argument here seems to be something like: “Adult humans are sometimes not self-aware, but we still care about them, even during those times. Is self-awareness therefore irrelevant??” No, of course it’s not. It’s a complex issue. But a chicken is never self-aware, so the point is moot.
Um, “Why don’t we stop caring about people who temporarily lose this supposed be-all and end-all of moral value” seems like a valid question, albeit one you hopefully are introspective enough to have an answer for.
Is the question “why don’t we temporarily stop caring about people who temporarily lose this etc.”?
If so, then maybe we should, if they really lose it. However, please tell me what actions would ensue from, or be made permissible by, a temporary cessation of caring, provided that I still care about that person after they return from this temporary loss of importance.
That depends on the details of your personal moral system, doesn’t it? As I said already, you may well be consistent on this point, but you have not explained how.