… or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so)
That one. Also, what terms we should use for which categories of things, and whether I know what you’re talking about, depend on what claims are being made… I was objecting to Zack_M_Davis’s claim, which I take to be either something like:
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also.”
or...
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way; if any other kind of brain then implements these phenomena in some way that fits our defined category, we should consider them to be bad also.”
I don’t think either of those claims is justified. Do you think they are? If you do, I guess we’ll have to work out what you’re referring to when you say “suffering”, and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we’re referring to.)
I don’t think either of those claims is justified. Do you think they are?
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don’t. So let me back up a little.
Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
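To make the shape of that explicit, here’s a toy numeric sketch (purely illustrative: the linear form and the 0.5 prior are arbitrary assumptions of the sketch, and nothing in the argument depends on them beyond monotonicity in C):

```python
# Toy illustration only: treat "how confident I am that I prefer B2 not be
# in S2" as a monotonically increasing function of C, my confidence that
# (B2, S2) is relevantly similar to (B1, S1). The linear form and the 0.5
# prior are arbitrary choices made just for this sketch.

def credence_prefer_not_s2(C: float, prior: float = 0.5) -> float:
    """Credence that I prefer B2 not be in S2, given similarity confidence C."""
    assert 0.0 <= C <= 1.0
    # C = 0: no relevant similarity, so I'm left at my prior.
    # C = 1: (B2, S2) is "completely identical" to (B1, S1), so credence -> 1.
    return prior + (1.0 - prior) * C

for C in (0.0, 0.5, 0.9, 1.0):
    print(f"C = {C:.1f} -> credence {credence_prefer_not_s2(C):.2f}")
```

The only feature doing any work here is that the credence rises with C and reaches (near-)certainty when C does.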
So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting, or finding, the same or similar (C is high) implementations (S2) in other brains (B2), then yes: I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that’s strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).
I don’t actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.
As long as C is high—that is, as long as we really are confident that the other brain has a “same or similar implementation”, as you say, along salient dimensions (such as manifesting similar subjective experience)—then I’m pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is “completely identical” to (S1,B1), I’m “certain” I prefer B2 not be in S2.
But I’m not sure that’s actually what you mean when you say “same or similar implementation.” You might, for example, mean that they have anatomical points of correspondence, but you aren’t confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
Is brain B1 your brain in this scenario? Or just… some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings’ brain states.
Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as “pain” and “suffering” (which, for us, might usefully be operationalized as “brain states we prefer not to be in”) are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing “pain” and “suffering” (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...
Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.
Or, he could have been making the claim that we can usefully describe the category of “pain” and/or “suffering” in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don’t know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.
I don’t think that conclusion is justified either… or rather, I don’t think it’s instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as “suffering” is by definition. And we all know that arguing “by definition” makes a def out of I and… wait… hm… well, it’s bad.
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.
My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological, or whatever) details depends a lot on those details. As does my confidence that B’s mind antiprefers the experiential correlates of those details. I agree that there’s no strict entailment here, though, “merely” evidence.
That said, mere evidence can get us pretty far. I am not inclined to dismiss it.