If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.
No they don’t. Are you saying it’s not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).
There’s a difference between “it’s possible to construct a mind” and “other particular minds are likely to be constructed a certain way.” Our minds were built by the same forces that built the other minds we know of. We should expect there to be similarities.
(I also would define it, not in terms of “pain and suffering” but “preference satisfaction and dissatisfaction”. I think I might consider “suffering” as dissatisfaction, by definition, although “pain” is more specific and might be valuable for some minds.)
Such as human masochists.
I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key follow-up question). I was objecting to the assertion of (logical?) necessity, especially since we don’t even have anything approaching strong certainty.
I don’t know that I’m comfortable with identifying “suffering” with “preference dissatisfaction” (btw, do you mean by this “failure to satisfy preferences” or “antisatisfaction of negative preferences”? i.e. if I like playing video games and I don’t get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).
I can’t speak for Raemon, but I would certainly say that the condition described by “I like playing video games and am prohibited from playing video games” is a trivial but valid instance of the category /suffering/.
Is the difficulty that there’s a different word you’d prefer to use to refer to the category I’m nodding in the direction of, or that you think the category itself is meaningless, or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so), or something else?
I’m usually indifferent to semantics, so if you’d prefer a different word, I’m happy to use whatever word you like when discussing the category with you.
… or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so)
That one. Also, what term we should use for which categories of things, and whether I know what you’re talking about, both depend on what claims are being made… I was objecting to Zack_M_Davis’s claim, which I take to be either something like:
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also.”
or...
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way; then, if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also.”
I don’t think either of those claims is justified. Do you think they are? If you do, I guess we’ll have to work out what you’re referring to when you say “suffering”, and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we’re referring to.)
I don’t think either of those claims is justified. Do you think they are?
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don’t. So let me back up a little.
Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that’s strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).
I don’t actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.
As long as C is high—that is, as long as we really are confident that the other brain has a “same or similar implementation”, as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I’m pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is “completely identical” to (S1,B1), I’m “certain” I prefer B2 not be in S2.
But I’m not sure that’s actually what you mean when you say “same or similar implementation.” You might, for example, mean that they have anatomical points of correspondence, but you aren’t confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
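To make the relationship just described concrete, here is a minimal toy sketch (my own framing, not anything stated in the thread; the function name, treating C as a probability, and the simple multiplicative rule are all illustrative assumptions):

```python
# Toy sketch: treat C as a probability that (B2, S2) is relevantly similar to
# (B1, S1), and let confidence in the transferred antipreference scale with it.
# The names and the multiplicative rule are illustrative assumptions only.

def confidence_antiprefer(c_similarity: float,
                          confidence_antiprefer_b1_s1: float = 1.0) -> float:
    """Confidence that I prefer B2 not be in S2.

    c_similarity: C, my confidence that S2-in-B2 is similar to S1-in-B1 along
        the salient dimensions (e.g. similar subjective experience, not just
        anatomical correspondence).
    confidence_antiprefer_b1_s1: how confident I am that I antiprefer B1 in S1.
    """
    return c_similarity * confidence_antiprefer_b1_s1


# High C ("same or similar implementation", similar experience): strong transfer.
print(confidence_antiprefer(0.95))  # 0.95
# Low C (anatomical correspondence only, unclear experience): weak transfer.
print(confidence_antiprefer(0.2))   # 0.2
```

This just writes down the monotonicity claim (higher C, more confidence); nothing in the thread commits anyone to this particular functional form.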
Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
Is brain B1 your brain in this scenario? Or just… some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings’ brain states.
Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as “pain” and “suffering” (which, for us, might usefully be operationalized as “brain states we prefer not to be in”) are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing “pain” and “suffering” (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...
Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.
Or, he could have been making the claim that we can usefully describe the category of “pain” and/or “suffering” in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don’t know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.
I don’t think that conclusion is justified either… or rather, I don’t think it’s instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as “suffering” is by definition. And we all know that arguing “by definition” makes a def out of I and… wait… hm… well, it’s bad.
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.
My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological, or whatever) details depends a lot on those details. As does my confidence that B’s mind antiprefers the experiential correlates of those details. I agree that there’s no strict entailment here, though, “merely” evidence.
That said, mere evidence can get us pretty far. I am not inclined to dismiss it.
No they don’t. Are you saying it’s not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things?
I’d do it that way. It doesn’t strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of “pain”. (Subjects report that they notice the sensation of pain, but they claim it doesn’t bother them.) I’d define suffering as wanting to get out of the state you’re in. If you’re fine with the state you’re in, it is not what I consider to be suffering.
Ok, that seems workable to a first approximation.
So, a question for anyone who both agrees with that formulation and thinks that “we should care about the suffering of animals” (or some similar view):
Do you think that animals can “want to get out of the state they’re in”?
Yes?
This varies from animal to animal. There’s a fair amount of research/examination into which animals appear to do so, some of which is linked elsewhere in this discussion. (At least some of it was linked in response to a statement about fish.)