4. Similarly, I frequently hear about dreams that are scary or disorienting, but I don’t think I’ve ever heard of someone recalling having experienced severe pain from a dream, even when they remember dreaming that they were being physically damaged.
This may be for reasons of selection: if dreams were more unpleasant, people would be less inclined to go to sleep and their health would suffer. But it’s interesting that scary dreams are nonetheless common. This again seems to point toward ‘states that are further from the typical human state are much more likely to be capable of things like fear or distress, than to be capable of suffering-laden physical agony.’
My guess would have been that dreams involve hallucinated perceptual inputs (sight, sound, etc.) but dreams don’t involve hallucinated interoceptive input (pain, temperature, hunger, etc.).
It seems physiologically plausible—the insular cortex is the home of interoceptive inputs and can have its hyperparameters set to “don’t hallucinate”, while other parts of the cortex can have their hyperparameters set to “do hallucinate”.
It seems evolutionarily plausible because things like “when part of the body is cold, vasoconstrict” and “when the stomach is full, release more digestive enzymes” or whatever, are still very important to do during sleep, and would presumably get screwed up if the insular cortex was hallucinating.
It seems introspectively plausible because it’s not just pain, but also hunger, hot-or-cold, or muscle soreness … when I’m cold in real life, I feel like I have dreams where I’m cold, etc. etc.
I think fear reactions are an amygdala thing that doesn’t much involve interoceptive inputs or the insular cortex. So they can participate in the hallucinations.
I’ve had a few dreams in which someone shot me with a gun, and it physically hurt about as much as a moderate stubbed toe or something (though the pain was in my abdomen where I got shot, not my toe). But yeah, pain in dreams seems pretty rare for me unless it corresponds to something that’s true in real life, as you mention, like being cold, having an upset stomach, or needing to urinate.
Googling {pain in dreams}, I see a bunch of discussion of this topic. One paper says:
Although some theorists have suggested that pain sensations cannot be part of the dreaming world, research has shown that pain sensations occur in about 1% of the dreams in healthy persons and in about 30% of patients with acute, severe pain.
I would also add that the fear responses, while participating in the hallucinations, aren’t themselves hallucinated, not any more than wakeful fear is hallucinated, at any rate. They’re just emotional responses to the contents of our dreams.
Since pain involves both sensory and affective components which rarely come apart, and the sensory precedes the affective, it’s enough to not hallucinate the sensory.
I do feel like pain is a bit different from the other interoceptive inputs in that the kinds of automatic responses to it are more like those to emotions, but one potential similarity is that it was more fitness-enhancing for sharp pain (and other internal signals going haywire) to wake us, but not so for sight, sound or emotions. Loud external sounds still wake us, too, but maybe only much louder than what we dream.
It’s not clear that you intended otherwise, but I would also assume not that there’s something suppressing pain hallucination (like a hyperparameter), but that hallucination is costly and doesn’t happen by default, so only things useful and safe to hallucinate can get hallucinated.
Also, don’t the senses evoked in dreams mostly match what people can “imagine” internally while awake, i.e. mostly just sight and sound? There could be common mechanisms here. Can people imagine pains? I’ve also heard it claimed that our inner voices only have one volume, so maybe that’s also true of sound in dreams?
FWIW, I think I basically have aphantasia, so can’t visualize well, but I think my dreams have richer visual experiences.
the fear responses, while participating in the hallucinations, aren’t themselves hallucinated
Yeah, maybe I should have said “the amygdala responds to the hallucinations” or something.
pain is a bit different from the other interoceptive inputs in that the kinds of automatic responses to it are more like those to emotions…
“Emotions” is kinda a fuzzy term that means different things to different people, and more specifically, I’m not sure what you meant in this paragraph. The phrase “automatic responses…to emotions” strikes me as weird because I’d be more likely to say that an “emotion” is an automatic response (well, with lots of caveats), not that an “emotion” is a thing that elicits an automatic response.
not that there’s something suppressing pain hallucination (like a hyperparameter), but that hallucination is costly and doesn’t happen by default
Again I’m kinda confused here. You wrote “not…but” but these all seem simultaneously true and compatible to me. In particular, I think “hallucination is costly” energetically (as far as I know), and “hallucination is costly” evolutionarily (when done at the wrong times, e.g. while being chased by a lion). But I also think hallucination is controlled by an inference-algorithm hyperparameter. And I’m also inclined to say that the “default” value of this hyperparameter corresponds to “don’t hallucinate”, and during dreams the hyperparameter is moved to a non-”default” setting in some cortical areas but not others. Well, the word “default” here is kinda meaningless, but maybe it’s a useful way to think about things.
Hmm, maybe you’re imagining that there’s some special mechanism that’s active during dreams but otherwise inactive, and this mechanism specifically “injects” hallucinations into the input stream somehow. I guess if the story was like that, then I would sympathize with the idea that maybe we shouldn’t call it a “hyperparameter” (although calling it a hyperparameter wouldn’t really be “wrong” per se, just kinda unhelpful). However, I don’t think it’s a “mechanism” like that. I don’t think you need a special mechanism to generate random noise in biological neurons where the input would otherwise be. They’re already noisy. You just need to “lower SNR thresholds” (so to speak) such that the noise is treated as a meaningful signal that can constrain higher-level models, instead of being ignored. I could be wrong though.
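A toy sketch of the "lower SNR thresholds" idea (purely my own illustration — the function name, numbers, and threshold values are all made up, and this is not a claim about actual neural coding): the same stream of baseline noise is ignored under a high threshold, and treated as meaningful signal under a low one, with no separate injection mechanism needed.

```python
def perceive(inputs, snr_threshold):
    """Treat each input as a meaningful signal only if it clears the threshold.

    Below-threshold inputs are ignored (None), so they never get a chance
    to constrain higher-level models.
    """
    return [x if abs(x) >= snr_threshold else None for x in inputs]

# The same baseline "neural noise" in both conditions:
noise = [0.3, -1.2, 0.7, 2.4, -0.5]

awake = perceive(noise, snr_threshold=3.0)     # high threshold: all noise ignored
dreaming = perceive(noise, snr_threshold=0.0)  # low threshold: all noise treated as signal

# "Awake", every value falls below threshold and is discarded; "dreaming",
# every value passes through as a percept. The only thing that changed
# between the two conditions is one threshold parameter.
```

The point of the sketch is just that "hallucination on/off" can fall out of a single continuous parameter over an always-noisy input, rather than a dedicated dream-time mechanism.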
I would also add that the fear responses, while participating in the hallucinations, aren’t themselves hallucinated, not any more than wakeful fear is hallucinated, at any rate. They’re just emotional responses to the contents of our dreams.
I disagree with this statement. For me, the contents of a dream seem only weakly correlated with whether I feel afraid during the dream. I’ve had many dreams with seemingly ordinary content (relative to the baseline of general dream weirdness) that were nevertheless extremely terrifying, and many dreams with relatively weird and disturbing content that were not frightening at all.
Another thing to consider regarding dreams is: Insects and fish don’t have dreams (I wonder if it’s maybe only mammals and some birds that do).
As humans, we make motivational tradeoffs based on pain (“should I go out and freeze in the cold to feed my hunger?”, etc.). And we change long-term behaviors based on pain (“earlier when I went there, I experienced pain, so I’d rather avoid that in the future”). And both of these things I just mentioned are also observed among lobsters (as explained in the embedded video).
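A minimal sketch of the distinction being drawn here (entirely illustrative — the function names, drive variables, and numbers are invented for the example, not taken from any animal-behavior model): a reflex is a fixed stimulus-response mapping, whereas a motivational tradeoff weighs competing internal drives, so the same external situation can produce different behavior depending on internal state.

```python
def reflex(body_damage):
    # Simple mechanism: fixed mapping from stimulus to response,
    # with no weighing of competing drives.
    return "withdraw" if body_damage else "stay"

def tradeoff(hunger, expected_pain):
    # Motivational tradeoff: competing internal drives are weighed
    # against each other, so the decision shifts as internal state changes.
    return "go_out_to_feed" if hunger > expected_pain else "stay_sheltered"

# Same expected pain, different hunger levels, different decisions:
assert tradeoff(hunger=0.9, expected_pain=0.4) == "go_out_to_feed"
assert tradeoff(hunger=0.2, expected_pain=0.4) == "stay_sheltered"
```

The lobster observations in the video are interesting precisely because they look like the second function, not the first.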
Something else that lobsters do and we also do: They change behavior (and body language) based on how “alpha” and “beta” they feel. As explained here at 1:21. And as humans we experience that our “qualia” can be tinged by such feelings.
So all animals that dream make motivational tradeoffs based on pain. But not all animals that make motivational tradeoffs based on pain have dreams.
Some hazy speculation from me: Maybe the feeling of pain is more “basic” (shared by older evolutionary ancestors) than some of the cognitive machinery that dreams help us maintain.
Here are some unorganized thoughts I have (these are related to your top-level post, but I’m including them here nonetheless):
Note the difference between cognitive tradeoffs and more simple mechanisms like “if bodily damage then scream”. If you think of consciousness as relating to a “non-local workspace” or something like that, then making tradeoffs seems like something that maybe could qualify.
It’s interesting to note that fish and lobsters rub painful parts of their body, and that anesthesia can make them stop doing that.
Many animals are social animals. For some examples of how fish can be social in sophisticated ways, see e.g. here. I have also heard it claimed that cockroaches are quite social animals. On Wikipedia they write: “Some species, such as the gregarious German cockroach, have an elaborate social structure involving common shelter, social dependence, information transfer and kin recognition (...) When reared in isolation, German cockroaches show behavior that is different from behavior when reared in a group. In one study, isolated cockroaches were less likely to leave their shelters and explore, spent less time eating, interacted less with conspecifics when exposed to them, and took longer to recognize receptive females. These effects might have been due either to reduced metabolic and developmental rates in isolated individuals or the fact that the isolated individuals had not had a training period to learn about what others were like via their antennae. Individual American cockroaches appear to have consistently different “personalities” regarding how they seek shelter. In addition, group personality is not simply the sum of individual choices, but reflects conformity and collective decision-making.”.
Many animals have language, including birds and fish and insects. (Or are we to only call something language if we can build complex phrases as humans do? If so, I think human children start reporting suffering before they have learned to properly speak or understand “language”.)
Admittedly non-toddler humans have a wider repertoire than other animals when it comes to reporting and describing suffering. Maybe that constitutes a qualitative leap of some kind, but I suspect that various other animals also can communicate in considerable nuance about how they feel.
Jeffrey Masson about pigs: “Piglets are particularly fond of play, just as human children are, and chase one another, play-fight, play-love, tumble down hills, and generally engage in a wide variety of enjoyable activities. (...) Though they are often fed garbage and eat it, their food choices—if allowed them—would not be dissimilar to our own. They get easily bored with the same food. They love melons, bananas, and apples, but if they are fed them for several days on end, they will set them aside and eat whatever other food is new first. (...) Much like humans, every single pig is an individual. (...) Some pigs are independent and tough, and don’t let the bad times get to them. Others are ultra-sensitive and succumb to sadness and even depression much more readily.”. Here is a video of what seems like a pig trying to help another pig.
I posit/guess/assume: Pain is often connected enough with language that we are able to report on how we feel using language—but also often not.
I posit/guess/assume: There are things that humans use suffering for that don’t rely on language, that would work and be useful without language, and that are used in the same way by evolutionary relatives of ours such as fish, insects, etc.
I do find it a bit weird that we have so much consciousness, as this seems (based on gut feeling) like something that would be inefficient (using unnecessary energy). It seems that you have a similar intuition. But the resolution of this “paradox” that seems most plausible to me, is that for whatever reason evolution has made animals that use “conscious” processes for a lot of things. Calculators are more efficient than human brains at arithmetic, but nonetheless, humans often use “conscious” processes even for simple arithmetic. Why assume that it’s any different for e.g. crows when they do arithmetic, or when they come up with creative plans based on spatial visualization?
Bees are influenced by “emotion” in ways that overlap with how humans are influenced by emotion. And even springtails sometimes have behavior that seems somewhat sophisticated (see e.g. this mating ritual, which seems more complicated to me than “body damage registered, move away”).
Here is an example of fish acting as if they have an appreciation for visual sights. Humans also act as if they have an appreciation for visual sights. And so do bears it would seem. If e.g. your 1st cousin says “what a beautiful view”, you assume that he has evolved conscious processes that appreciate beauty (just like you). It would after all be weird if different evolutionary mechanisms had evolved for the two of you. The more evolutionary distance there is between someone, the weaker this kind of argument becomes, but I still think a good deal of it is left for distant cousins such as fish and bears.
I don’t disagree with “‘Conscious’ is incredibly complicated and weird. We have no idea how to build it.”. But you could also say “Lobsters are incredibly complicated and weird. We have no idea how to build a lobster.”
Reducing the risk of being singled out by predators can be an evolutionary disincentive against giving overt signals of pain/hunger/etc.
Makes sense to me, and seems like a good reason not to update (or to update less) from dreams to ‘pain is fragile’.