I think the functional approach is ultimately correct, but that suffering is a much more complex phenomenon than simply being a system with learnable negative feedback. A simple way to illustrate this is to notice that the amount of suffering changes drastically (in my own experience) across functionally very similar negative feedback loops. Walking along a path involves negative feedback (which I can learn from), but I don’t feel like I’m suffering at all when I notice I’m deviating slightly (and even if it turns out I am, the amount of suffering is still much, much lower than for standard suffering). In fact, it suspiciously seems like the amount of suffering is correlated with experiences which would likely have harmed my reproductive fitness 20,000 years ago. It even disconnects from the sensation of pain: e.g. I suffer much less from painful experiences which I believe are healing compared to ones which I believe are harmful, even if the raw sensation feels the same. Another strange thing about suffering is that it increases the more attention I pay to a constant pain signal. On the other hand, emotional pain doesn’t seem (in my experience) to be as separated from suffering. Anyway, the point is that we need to vet definitions of suffering against our own experiences before trying to apply them to animals.
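To make the contrast concrete, here is a minimal sketch (purely illustrative, not anyone's proposed definition) of the kind of system a "learnable negative feedback" criterion would have to count as a sufferer: it detects deviation from a path, corrects it, and even adapts its own correction gain from experience.

```python
# Minimal sketch of a "learnable negative feedback" loop: a walker that
# corrects its deviation from a path and adapts its correction gain.
# All names are illustrative.

def follow_path(position: float, steps: int = 12) -> None:
    gain = 0.2                      # how strongly deviations are corrected
    prev_error = abs(position)
    for step in range(steps):
        error = position            # deviation from the path (target = 0)
        position -= gain * error    # negative feedback: push back toward the path
        # "Learning": if the deviation is shrinking too slowly, correct harder.
        if abs(position) > 0.5 * prev_error:
            gain = min(1.0, gain + 0.1)
        prev_error = abs(position)
        print(f"step {step:2d}: deviation = {position:+.4f}, gain = {gain:.2f}")

follow_path(position=1.0)
```

The functional question is then what has to be added, beyond a loop like this, before "suffering" becomes the right description.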
Upvoted for being a great reply with opinion, argument, example, and context.
I disagree though. I think a functional approach is ultimately the most likely to be adopted in the long term, as it is the only feasible one. But I think the correct answer is
The ability to suffer is determined in an anthropocentric way.
There is no natural category of suffering outside of humans. And that is mainly because what is meant by suffering is no longer just something that goes on in the brain but also, to a significant degree, a social construct: a linguistic concept and a political agenda. Sooner or later we will probably factor it into multiple clearly defined sub-parts, one of them being a functional part like the one Adele seems to mean. But at that point the definition cuts out most of what is actually going on.
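Thanks!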
Suffering seems like a natural category to me, at least in the way it’s used to classify experiences.
Even if it is a social construct, that doesn’t mean that animals or AIs couldn’t have a meaningfully similar experience. I’d be quite surprised if it turned out that e.g. chimps truly do not suffer in any of the common senses of the word.
For sure chimps perceive pain and avoid it. But there seem to be quite significant differences between pain and suffering. You mention this yourself:
It even disconnects from the sensation of pain: e.g. I suffer much less from painful experiences which I believe are healing compared to ones which I believe are harmful, even if the raw sensation feels the same. Another strange thing about suffering is that it increases the more attention I pay to a constant pain signal.
This, together with my personal experience, seems to imply that suffering (but not pain) depends on consciousness and maybe even on social identity expectations.
Yeah, I meant what I said about chimps experiencing suffering. To the extent that consciousness and social identity are relevant, I believe chimps have those to a sufficient degree.
Maybe. Chimps and gorillas for sure have some consciousness. They can recognize themselves and they have social cognition. They can express frustration. I am not sure they can represent frustration.
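https://wiki.c2.com/?LeibnizianDefinitionOfConsciousness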
Though arguing about whether that is required to call it suffering is haggling over the definition of a word. I don’t want to do that. I want to defend the claim
The ability to suffer is determined in an anthropocentric way.
We may disagree on where to draw the line or how to assign weight to what we call suffering, but the key point is not about the is but about the ought. And at least the ought is anthropocentric: whether some structure in nature (‘suffering’) compels us to act in a certain way toward it (‘minimize it’) is a social construct. It results from empathy and from social expectations that are generalized.
Note that just saying this doesn’t make it any less so. I do have empathy with chimps and other animals, and I would do (some) things to reduce their suffering. For sure, if everybody around me agreed that reducing suffering is the right thing to do, I would take that as strong evidence in its favor. I’m just aware of where it comes from.
PS. Thank you for continuing this controversial discussion.
Let me try to rephrase this in terms of something that can be done in a lab, and see if I get your point correctly. We should conduct experiments with humans, identifying what causes suffering, at what intensity, and what happens in the brain during it. Then, if an animal has the same brain regions, it is capable of suffering; otherwise, it is not. But that won’t be the functional approach, and we can’t extrapolate it blindly to AI.
If we want the functional approach, we can only look at behavior: what we do while we suffer, what we do afterwards, etc. Then a being suffers if it demonstrates the same behavior. Here the problem will be how to generalize from human behavior to animals and AI.
I think the experiments you describe on humans are a reasonable start, but you would then need to ask: “Why did suffering evolve as a distinct sensation from pain?” I don’t think you can determine the function of suffering without being able to answer that. Then you could look at other systems and see whether something with the same functionality exists. I think that’s how you could generalize to both other animals and AI.