Pain is the consequence of a perceived reduction in the probability that an agent will achieve its goals.
In biological organisms, physical pain [say, in response to a limb being removed] is an evolutionary consequence of the fact that organisms with the capacity to feel physical pain avoided situations in which the pain-generating subsystems [e.g. a limb] required for their long-term goals [e.g. locomotion to a favourable position] were harmed.
This definition applies equally to mental pain [say, the pain felt when being expelled from a group of allies], which likewise impedes long-term goals.
This suggests that any system that possesses both a set of goals and the capacity to understand how events influence its probability of achieving those goals should possess a capacity to feel pain. It also suggests that the amount of pain is proportional to the magnitude of the “setback” and to the degree to which the “setback” is perceived.
I think this is a relatively robust argument for the inherent reality of pain not just in a broad spectrum of biological organisms, but also in synthetic agents [including sufficiently advanced AIs].
We should strive to reduce the pain we cause in the agents we interact with.
It also suggests that there might be some sort of conservation law for pain for agents.
Conservation of Pain, if you will.
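One way to make the definition concrete is a minimal sketch, assuming a goal-weighted probability model; the goals, weights, and probabilities below are purely illustrative:

```python
# Illustrative sketch: pain as a perceived drop in goal-achievement
# probability. All goals, weights, and probabilities are hypothetical.

def pain(goal_weights, p_before, p_after):
    """Total pain: importance-weighted sum of perceived probability drops.

    goal_weights: dict mapping goal -> importance weight
    p_before, p_after: dicts mapping goal -> perceived probability of
    achieving it before and after the new information arrives.
    """
    return sum(
        w * max(0.0, p_before[g] - p_after[g])  # only reductions register as pain
        for g, w in goal_weights.items()
    )

# Example: losing a limb sharply lowers the perceived probability of a
# locomotion-dependent goal, so the computed pain is large.
weights = {"reach_shelter": 1.0, "find_food": 0.5}
before = {"reach_shelter": 0.75, "find_food": 0.5}
after = {"reach_shelter": 0.25, "find_food": 0.5}
print(pain(weights, before, after))  # 0.5
```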
I think pain is a little bit different than that. It’s the contrast between the current state and the goal state. This contrast motivates the agent to act, once the pain of the contrast becomes greater than the (predicted) pain of acting.
As a human, you can decrease your pain by thinking that everything will be okay, or you can increase your pain by doubting the process. But it is unlikely that you will allow yourself to stop hurting, because your brain fears that a lack of suffering would result in a lack of progress (some wise people contest this, claiming that wu wei is correct).
Another way you can increase your pain is by focusing more on the goal you want to achieve, sort of irritating/torturing yourself with the fact that the goal isn’t achieved, to which your brain will respond by increasing the pain of the contrast, urging action.
Do you see how this differs slightly from your definition? Chronic pain is not a continuous reduction in agency, but a continuous contrast between a bad state and a good state, which produces pain that motivates one to solve it (exercise, surgery, rest, looking for painkillers, etc.). This generalizes to other negative feelings, for instance hunger, which exists in order to be less pleasant than the search for food, so that you seek food.
I warn you that avoiding negative emotions can lead to stagnation, since suffering leads to growth (unless we start wireheading and make the avoidance of pain our new goal, because then we might seek hedonic pleasures and intoxicants).
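A toy sketch of this threshold view, in the same spirit as the sketch above; the pain values are invented:

```python
# Toy model of the threshold view: the agent acts once the pain of the
# contrast between the current state and the goal state exceeds the
# predicted pain of acting. The numbers are hypothetical.

def should_act(pain_of_contrast: float, predicted_pain_of_acting: float) -> bool:
    return pain_of_contrast > predicted_pain_of_acting

# Focusing on the unmet goal raises pain_of_contrast; telling yourself
# "everything will be okay" lowers it, postponing action.
print(should_act(pain_of_contrast=7.0, predicted_pain_of_acting=5.0))  # True
print(should_act(pain_of_contrast=3.0, predicted_pain_of_acting=5.0))  # False
```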
I would certainly agree with part of what you are saying, especially the point that many important lessons are taught by pain [correct me if this is misinterpreting your comment]. Indeed, as a parent, for example, if your goal is for your child to gain the capacity for self-sufficiency, a certain number of painful lessons that reflect the inherent properties of the world are necessary to achieve that goal.
On the other hand, I do not agree with your framing of pain as the main motivator [again, correct me if required]. In fact, a wide variety of systems in the brain are concerned with calculating and granting rewards. Perhaps pain and pleasure are two sides of the same coin, and reward maximisation and regret minimisation are identical. In practice, however, I think they often lead to different solutions.
I also do not agree with your interpretation that chronic pain does not reduce agency. For family members of mine suffering from arthritis, chronic pain renders them unable to do many basic activities, for example accessing areas that require climbing stairs. I would like to emphasise that it is the pain, not the disease, which limits their “degrees of freedom” [at least in the short term]: were they to take a large amount of painkillers, they could temporarily climb stairs again.
Finally, I would suggest that your framing of pain as a “contrast between the current state and the goal state” is basically an alternative way of talking about the transition probability from the current state to the goal state. If that is right, our conceptualisations of pain are overwhelmingly similar.
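One hypothetical way to make that bridge explicit, with made-up numbers:

```python
# Hypothetical bridge between the two framings: define the "contrast" as the
# perceived improbability of transitioning from the current state to the
# goal state. A drop in transition probability then raises the contrast,
# so the two notions of pain move together.

def contrast(p_transition):
    # p_transition: perceived probability of reaching the goal state
    return 1.0 - p_transition

def pain_from_contrast(p_transition, goal_weight=1.0):
    return goal_weight * contrast(p_transition)

# Transition probability drops from 0.75 to 0.25, so the pain rises from
# 0.25 to 0.75, tracking the perceived reduction in probability exactly.
print(pain_from_contrast(0.75), pain_from_contrast(0.25))  # 0.25 0.75
```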
I think all the criticism, shaming, guilt-tripping, punishments, and rewards directed at children exist for the purpose of driving them to certain actions. If your children do what you think is right, there’s no need to do much of anything.
A more general and correct statement would be “Pain is for the sake of change, and all change is painful”. But that change is for the sake of actions, and I don’t think that simplification is too great for the statement to be useful.
I think regret, too, is connected here. And there are certainly times when it seems like pain is the problem rather than an attempt to solve it, but I think that’s a misunderstanding. And while chronic pain does reduce agency, it’s a constant pain and a constant reduction of agency (not cumulative): the pain persists until the problem is solved, even if the problem does not get worse. It’s the body telling the brain “Hey, do something about this; the importance is 50 units of pain”, and you will do anything to solve it as long as there’s a path costing less than 50 units of pain which leads to a solution.
The pain does limit agency, but not because it’s a real limitation; it’s an artificial one that the body creates to prevent you from damaging yourself, so all important agency is still possible. If the estimated consequences of avoiding a task are more painful than doing the task, you do it. Again, the body is just estimating the cost/benefit of tasks and choosing the optimal action by making it the least painful one.
My explanation and yours are almost identical, but there are some important differences. In my view, suffering is good, not bad, and I really don’t want humanity to misunderstand this one fact; the misunderstanding has already had profound negative consequences. Pain is phantom damage created to avoid real damage. An agent which is unable to feel physical pain and exhaustion would destroy itself; therefore physical pain and exhaustion are valuable, not problems to be solved.

Emotions like suffering, exhaustion, and annoyance function the same as physical pain: once they get over a certain threshold, they coerce you into taking an action. Physical pain comes from nerves, but emotional pain comes from your interpretation of reality. Your brain relies on you to tell it what ought to be painful (so if you overestimate a risk, it simply believes you). And you don’t get to choose all your goals yourself; your brain wants you to fulfill your needs (prioritized by the hierarchy of needs). In short, the brain makes inaction painful, while keeping actions that it deems risky painful, and then adjusts the weights/thresholds according to need. Just as with hunger: not eating is painful, but if all you have is stale or even moldy bread, then you need to be very hungry before you eat, and you will eat iff pain(hunger) > pain(eating the bread).
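A tiny decision-rule sketch of this picture; the iff rule is the one stated above, and the action names and pain values are invented:

```python
# The brain as a cost/benefit estimator: every option, including inaction,
# carries an estimated pain, and the least painful option wins. The action
# names and pain values are hypothetical.

def least_painful(options):
    """options: dict mapping action name -> estimated pain of taking it."""
    return min(options, key=options.get)

# Fresh bread: eating costs little, so mild hunger already favours eating.
print(least_painful({"keep starving": 2.0, "eat fresh bread": 0.5}))
# Moldy bread: eating is itself painful, so you eat iff
# pain(hunger) > pain(eating the bread).
print(least_painful({"keep starving": 2.0, "eat moldy bread": 6.0}))  # keep starving
print(least_painful({"keep starving": 8.0, "eat moldy bread": 6.0}))  # eat moldy bread
```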
An increase in power/agency feels a lot like happiness, though, even according to Nietzsche, whom I’m not confident enough to argue against, so I get why you’d think that the opposite of happiness is the opposite of agency (sorry if this summary does your point an injustice).
In biological organisms, physical pain [say, in response to a limb being removed] is an evolutionary consequence of the fact that organisms with the capacity to feel physical pain avoided situations in which the pain-generating subsystems [e.g. a limb] required for their long-term goals [e.g. locomotion to a favourable position] were harmed.
How many organisms other than humans have “long-term goals”? Doesn’t that require a complex capacity for mental representation of possible future states?
Am I wrong in assuming that the capacity to experience “pain” is independent of an explicit awareness of which possibilities have been shifted as a result of the new sensory data (e.g. having a limb cleaved from the rest of the body, or stubbing your toe in the dark)? The organism may not even be aware of those possibilities, only ‘aware’ of the pain.
Note: I’m probably just afraid that this sounds all too teleological and personifies evolution.