the fear responses, while participating in the hallucinations, aren’t themselves hallucinated
Yeah, maybe I should have said “the amygdala responds to the hallucinations” or something.
pain is a bit different from the other interoceptive inputs in that the kinds of automatic responses to it are more like those to emotions…
“Emotions” is kinda a fuzzy term that means different things to different people, and more specifically, I’m not sure what you meant in this paragraph. The phrase “automatic responses…to emotions” strikes me as weird because I’d be more likely to say that an “emotion” is an automatic response (well, with lots of caveats), not that an “emotion” is a thing that elicits an automatic response.
not that there’s something suppressing pain hallucination (like a hyperparameter), but that hallucination is costly and doesn’t happen by default
Again I’m kinda confused here. You wrote “not…but” but these all seem simultaneously true and compatible to me. In particular, I think “hallucination is costly” energetically (as far as I know), and “hallucination is costly” evolutionarily (when done at the wrong times, e.g. while being chased by a lion). But I also think hallucination is controlled by an inference-algorithm hyperparameter. And I’m also inclined to say that the “default” value of this hyperparameter corresponds to “don’t hallucinate”, and during dreams the hyperparameter is moved to a non-“default” setting in some cortical areas but not others. Well, the word “default” here is kinda meaningless, but maybe it’s a useful way to think about things.
Hmm, maybe you’re imagining that there’s some special mechanism that’s active during dreams but otherwise inactive, and this mechanism specifically “injects” hallucinations into the input stream somehow. I guess if the story was like that, then I would sympathize with the idea that maybe we shouldn’t call it a “hyperparameter” (although calling it a hyperparameter wouldn’t really be “wrong” per se, just kinda unhelpful). However, I don’t think it’s a “mechanism” like that. I don’t think you need a special mechanism to generate random noise in biological neurons where the input would otherwise be. They’re already noisy. You just need to “lower SNR thresholds” (so to speak) such that the noise is treated as a meaningful signal that can constrain higher-level models, instead of being ignored. I could be wrong though.
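If it helps, here’s a toy sketch of what I mean by “hyperparameter” rather than “mechanism” — my own made-up illustration, not a claim about how neurons actually implement it. The function, the noise-floor constant, and the precision-weighting scheme are all assumptions for the sake of the example; the only thing that differs between the “awake” and “dreaming” cases is the SNR-threshold hyperparameter, while the noisy input itself stays exactly the same.

```python
import numpy as np

def infer_percept(prior_mean, prior_precision, sensory_input, snr_threshold):
    """Combine a top-down prior with bottom-up input, but only if the input's
    estimated SNR clears a threshold; otherwise the input is treated as noise
    and ignored. (Toy illustration; all numbers are made up.)"""
    noise_floor = 1e-3                      # assumed baseline neural noise power
    snr = np.var(sensory_input) / noise_floor
    if snr < snr_threshold:
        return prior_mean                   # input ignored: percept stays at the prior
    sensory_precision = snr                 # crude stand-in for how much the input is trusted
    return ((prior_precision * prior_mean
             + sensory_precision * np.mean(sensory_input))
            / (prior_precision + sensory_precision))

rng = np.random.default_rng(0)
eyes_closed_noise = rng.normal(0.0, 0.03, size=200)   # same noisy input in both cases

# "Awake" setting: high threshold, so the noise is discounted and the prior wins.
print(infer_percept(0.0, 10.0, eyes_closed_noise, snr_threshold=5.0))

# "Dreaming" setting: only the threshold changes; now the very same noise
# gets treated as signal and constrains the higher-level estimate.
print(infer_percept(0.0, 10.0, eyes_closed_noise, snr_threshold=0.1))
```

The point of the sketch is just that no separate “hallucination injector” appears anywhere — the noise is always present, and one scalar knob decides whether it gets ignored or gets to constrain the model.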