Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to state S1 of brain B1. The higher C is, the more confident I am that I prefer that B2 not be in S2; the lower C is, the less confident I am.
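To put the same relationship a bit more formally (my own gloss and notation, nothing rigorous): writing $f$ for some monotonically increasing function, my credence that I prefer B2 not be in S2 is roughly $f(C)$; it rises as $C$ rises and falls as $C$ falls.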
Is brain B1 your brain in this scenario? Or just… some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be, or not be, in state Sx; we need to answer that first, and only then move on to what our preferences are w.r.t. other beings’ brain states.
Anyway, it seemed to me that the claim Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as “pain” and “suffering” (which, for us, might usefully be operationalized as “brain states we prefer not to be in”) are found in other life-forms, and we thus conclude that a) these beings are also experiencing “pain” and “suffering” (i.e. are having the same subjective experiences), and b) these beings also have antipreferences about those brain states...
Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.
Or, he could have been making the claim that we can usefully describe the category of “pain” and/or “suffering” in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don’t know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.
I don’t think that conclusion is justified either… or rather, I don’t think it’s instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but the subjective experience associated with those brain states bears no resemblance to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as “suffering” is by definition. And we all know that arguing “by definition” makes a def out of I and… wait… hm… well, it’s bad.
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.
My confidence that brain B manifests a mind that experiences pain and suffering, given certain implementation (or functional, or phenomenological, or whatever) details, depends a lot on those details. As does my confidence that B’s mind antiprefers the experiential correlates of those details. I agree that there’s no strict entailment here, though; it’s “merely” evidence.
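(In Bayesian terms, and this is just my gloss: the observed details $D$ are evidence, so $\Pr(\text{suffering} \mid D) = \Pr(D \mid \text{suffering}) \, \Pr(\text{suffering}) / \Pr(D)$, which the right $D$ can drive quite high, but which never reaches 1 by entailment alone.)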
That said, mere evidence can get us pretty far. I am not inclined to dismiss it.