Well, in so much as the intelligence is not distracted and can opt to sit still and play dead, there doesn’t seem to be a point in fainting. Any time I’ve had a somewhat notable injury (falling off a bike, ripping my chin, getting a nasty case of road rash), the pain has been less than the pain of minor injuries.
Contrast the anticipation of pain with actual pain. Those feel very different. Maybe it is fair to say that pain is instrumental in creating the anticipation of pain, which acts more like a utility for an intelligent agent. Pain also serves as a warning signal, for conditioning, and generally as something that stops you from eating yourself (and perhaps for telling the intelligence what is and isn’t your body). Pain is supposed to encourage you to deal with the damage, but not to distract you from dealing with it.
Well, in so much as the intelligence is not distracted and can opt to sit still and play dead, there doesn’t seem to be a point in fainting.
I don’t pretend to know exactly why nature does it—but I expect there’s a reason. It may be that sometimes being conscious is actively bad. This is one of the reasons for administering anaesthetics—there are cases where a conscious individual in a lot of pain will ineffectually flail around and get themselves into worse trouble, where they would be better off being quiet and still, “playing dead”.
As to why not “play dead” while remaining conscious—that’s a bit like having two “off” switches. There’s already an off switch. Building a second one that bypasses all the usual responses of the conscious mind while remaining conscious could be expensive. Perhaps not ideal for a rarely-used feature.
A lot of the time something is just a side effect. E.g. if you select for less aggressive foxes, you end up with foxes with floppy ears and white spots on their fur.
With regard to flailing around, that strikes me as more of a reflex than utility-driven behaviour. As for playing dead: I can sit still while having my teeth done without anaesthesia.
The problem with just fainting is that it is a reflex—under some conditions, faint; under other conditions, flail around—not proper utility-maximizing agent behaviour: what are the consequences of flailing around, what are the consequences of sitting still, and choose the one that has the better consequences.
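The contrast being drawn here, a reflex versus a consequence-based choice, can be sketched in a few lines (the stimuli, actions, and payoff numbers below are made up purely for illustration):

```python
# Sketch: reflex vs. consequence-based action selection.
# All stimuli, actions, and payoff numbers are invented for illustration.

def reflex_agent(stimulus):
    """Fixed stimulus -> response mapping; consequences never considered."""
    table = {"sharp_pain": "flail", "overwhelming_pain": "faint"}
    return table.get(stimulus, "do_nothing")

def utility_maximizing_agent(actions, predicted_outcome_utility):
    """Pick whichever action is predicted to have the best consequences."""
    return max(actions, key=predicted_outcome_utility)

# A situation where flailing makes things worse:
utilities = {"flail": -10.0, "sit_still": -2.0, "faint": -3.0}
choice = utility_maximizing_agent(list(utilities), utilities.get)
print(reflex_agent("sharp_pain"))  # flail
print(choice)                      # sit_still
```

The reflex fires the same response regardless of outcome; the utility maximizer compares predicted consequences and picks sitting still.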
It seems to me that originally pain was there to train the neural network not to eat yourself; then it got re-used for other things that it is not very suitable for.
The problem with just fainting is that it is a reflex—under some conditions, faint; under other conditions, flail around—not proper utility-maximizing agent behaviour: what are the consequences of flailing around, what are the consequences of sitting still, and choose the one that has the better consequences.
Well, it’s a consequence of resource limitations. A supercomputer with moment-by-moment control over actions might never faint. However, when there’s a limited behavioural repertoire with less precise control over what action to take—and limited space in which to associate sensory stimuli and actions—occasionally fainting could become a more reasonable course of action.
It seems to me that originally pain was there to train the neural network not to eat yourself; then it got re-used for other things that it is not very suitable for.
The pleasure-pain axis is basically much the same idea as a utility value—or perhaps the first derivative of a utility. The signal might be modulated by other systems a little, but that’s the essential nature of it.
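One way to read “the first derivative of a utility” is that the momentary signal tracks the change in a utility-like value from step to step, in the spirit of potential-based reward shaping. A minimal sketch under that assumption (the trajectory numbers are hypothetical):

```python
# Sketch: reward as the discrete "derivative" of a utility-like state value.
# The trajectory values are hypothetical, chosen only to illustrate the idea.

def rewards_from_utility(utility_trajectory):
    """Reward at each step = change in utility from the previous step."""
    return [b - a for a, b in zip(utility_trajectory, utility_trajectory[1:])]

# Hypothetical utility over time: healthy, injured, then recovering.
trajectory = [0.0, -5.0, -4.0, -1.0]
print(rewards_from_utility(trajectory))  # [-5.0, 1.0, 3.0]
```

On this reading, the injury produces a sharp negative signal, while recovery, even from a still-bad state, produces a positive one.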
Then why does anticipated pain feel so different from actual ongoing pain?
Also, I think it’s more a consequence of the resource limitations of a worm or a fish. We don’t have such severe limitations.
Other issue: consider 10 hours of harmless but intense pain vs. a perfectly painless lobotomy. I think most of us would try harder to avoid the latter than the former, and would prefer the pain. Edit: furthermore, we could consciously and wilfully take a painkiller, but not a lobotomy-fear-neutralizer.
Then why does anticipated pain feel so different from actual ongoing pain?
I’m not sure I understand why they should be similar. Anticipated pain may never happen. Combining anticipated pain with actual pain probably doesn’t happen much, because that would “muddy” the reward signal. You want a “clear” reward signal to facilitate attributing the reward to the actions that led to it. Smearing reward signals out over time too much doesn’t help with that.
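The credit-assignment point can be illustrated with a toy eligibility-trace calculation: when the reward arrives as one sharp pulse right after the causal action, the credit lands on that action; when the same total reward is smeared over later time steps, an unrelated later action can soak up more credit than the causal one. A sketch with made-up numbers:

```python
# Sketch: attributing reward to past actions via exponentially decaying
# eligibility traces. All times, amounts, and the decay rate are made up.

def credit(action_times, reward_schedule, decay=0.5):
    """Credit per action: sum over rewards of amount * decay**(delay)."""
    out = {}
    for t_a in action_times:
        out[t_a] = sum(r * decay ** (t_r - t_a)
                       for t_r, r in reward_schedule if t_r >= t_a)
    return out

# A causal action at t=0 and an unrelated action at t=3.
sharp_credit = credit([0, 3], [(1, 1.0)])                       # one pulse at t=1
smeared_credit = credit([0, 3], [(t, 1.0 / 6) for t in range(1, 7)])  # same total

print(sharp_credit)    # the causal action at t=0 gets all the credit
print(smeared_credit)  # the unrelated action at t=3 now gets more credit
```

With the sharp pulse, only the causal action is credited; once the reward is smeared across t=1..6, the action at t=3 sits closer to most of the reward and out-scores the action that actually caused it.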
I think it’s more a consequence of the resource limitations of a worm or a fish. We don’t have such severe limitations.
Maybe—though that’s probably not an easy hypothesis to test.
Well, utility, as in the ‘utility function’ of a utility-maximizing agent, is something that’s calculated over predicted future states. Pain is only calculated in the now. That’s a subtle distinction.
I think this lobotomy example (provided that the subject knows what a lobotomy is and what the brain is, and thus doesn’t want a lobotomy) clarifies why I don’t think pain works quite like a utility function’s output. Fear does work like a proper utility function’s output. When you fear something, you also don’t want to get rid of that fear (with some exceptions among people who basically don’t fear correctly). And fear is all about future state.
Well, utility, as in the ‘utility function’ of a utility-maximizing agent, is something that’s calculated over predicted future states. Pain is only calculated in the now. That’s a subtle distinction.
It’s better to think of future utility as an extrapolation of current utility—and of current utility as basically the same thing as the position of the pleasure-pain axis. Otherwise there is a danger of pointlessly duplicating concepts.
Distinguishing too sharply between utility and pleasure causes a lot of problems. The pleasure-pain axis is nature’s attempt to engineer a utility-based system. It did a pretty good job.
Of course, you should not take “pain” too literally—in the case of humans. Humans have modulations on pain that feed into their decision circuitry—but the result is still eventually collapsed down into one dimension—like a utility value.
It’s better to think of future utility as an extrapolation of current utility—and of current utility as basically the same thing as the position of the pleasure-pain axis. Otherwise there is a danger of pointlessly duplicating concepts.
The danger here is in inventing terminology that is at odds with the normally used terminology, resulting in confusion when reading texts written in the standard terminology. I would rather describe human behaviour as a ‘learning agent’ as per Russell & Norvig (2003), where pain is part of the ‘critic’. You can see a diagram on Wikipedia: http://en.wikipedia.org/wiki/Intelligent_agent
Ultimately, the overly broad definitions become useless.
We also have a bit of ‘reflex agent’ in us, where pain makes you flinch away, flail around, or faint (though I’d dare a guess that most people don’t faint even when pain has saturated and can’t increase any further).
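The learning-agent decomposition referred to above can be sketched loosely: a ‘critic’ turns percepts (here, a pain signal) into a scalar judgement, which the ‘learning element’ uses to adjust the ‘performance element’. This is a rough illustration of the idea, not code from Russell & Norvig; all names and numbers are invented:

```python
# Loose sketch of the "learning agent" decomposition: the critic converts
# a percept (including pain) into feedback; the learning element uses that
# feedback to adjust the performance element's policy. All identifiers and
# numbers here are illustrative inventions, not from the book.

class LearningAgent:
    def __init__(self):
        self.policy = {}  # situation -> action, adjusted over time

    def critic(self, percept):
        """Turn the raw percept into a scalar performance judgement."""
        return -percept.get("pain", 0.0)

    def performance_element(self, percept):
        """Choose an action according to the current policy."""
        return self.policy.get(percept["situation"], "explore")

    def learning_element(self, percept, action, feedback):
        """Crude learning rule: if an action led to pain, stop choosing it."""
        if feedback < 0:
            self.policy[percept["situation"]] = "avoid_" + action

agent = LearningAgent()
percept = {"situation": "hot_stove", "pain": 8.0}
action = agent.performance_element(percept)                  # "explore"
agent.learning_element(percept, action, agent.critic(percept))
print(agent.performance_element(percept))                    # "avoid_explore"
```

The reflex-agent behaviour mentioned above would bypass the policy lookup entirely, mapping the pain percept straight to flinching, flailing, or fainting.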