Rather, I’m confident that executing my research process will over time lead to something good.
Yeah, this is a sentiment I agree with. I think it makes sense to have a cognitive process that self-corrects and systematically moves towards solving whatever problem it is faced with. In terms of computability theory, one could imagine it as an effectively computable function that you expect will return the answer—the only ‘obstacle’ is the time / compute invested.
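To make the computability picture concrete, here is a minimal Python sketch (my illustration, not from the original comment; `is_answer` and `candidate` are hypothetical stand-ins): a search whose correctness is carried by the procedure rather than by any single guess, guaranteed to halt whenever an answer exists, with compute as the only cost.

```python
from itertools import count

def solve(is_answer, candidate):
    """Enumerate a countable candidate space; halts iff an answer exists.

    The 'only obstacle is time / compute' framing: no individual guess
    needs to be right, because the process systematically covers the
    whole space.
    """
    for n in count():
        c = candidate(n)
        if is_answer(c):
            return c

# Toy usage: the smallest integer whose square exceeds 10**6.
print(solve(lambda x: x * x > 10**6, lambda n: n))  # -> 1001
```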
I think being confident, i.e. not feeling hopeless about what you are doing, is important. The key takeaway is that you don’t need to be confident in any particular idea you come up with. Instead, you can be confident in the broader picture of what you are doing, i.e. your processes.
I share your sentiment, although the causal model for it is different in my head. A generalized feeling of hopelessness is an indicator of mistaken assumptions and causal models in my head, and I use it as a cue to investigate why I feel that way. This usually leaves me hopeless about specific paths but generally purposeful (for I then have an idea of what I want to do next), downstream of updates to a causal model that attempts to track reality as closely as possible.
Note that when I said I disagree with your decisions, I specifically meant the sort of myopia in the glass shard story—and specifically because I believe that if your research process / cognition algorithm is fragile enough that you’d be willing to take physical damage to hold onto an inchoate thought, maybe consider making your cognition algorithm more robust.
On my current models of theoretical[1] insight-making, the beginning of an insight will necessarily—afaict—be “non-robust”/chaotic. I think it looks something like this:
A gradual build-up and propagation of salience wrt some tiny discrepancy between highly confident specific beliefs
This maybe corresponds to simultaneously-salient neural ensembles whose oscillations are inharmonic[2]
Or in the frame of predictive processing: unresolved prediction-error between successive layers
Immediately followed by a resolution of that discrepancy if the insight is successful
This maybe corresponds to the brain having found a combination of salient ensembles—including the originally inharmonic ensembles—whose oscillations are adequately harmonic.
Super-speculative but: If the “question phase” in step 1 was salient enough, and the compression in step 2 great enough, this causes an insight-frisson[3] and a wave of pleasant sensations across your scalp, spine, and associated sensory areas.
This maps to a fragile/chaotic high-energy “question phase” during which the violation of expectation is maximized (in order to adequately propagate the implications of the original discrepancy), followed by a compressive low-energy “solution phase” where correctness of expectation is maximized again.
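Here is a toy numerical cartoon of that two-phase picture (my own sketch with made-up dynamics and constants, not a claim about actual neural mechanics): a scalar belief predicts a stimulus, a tiny discrepancy gains salience until the weighted prediction error peaks (question phase), and then the belief updates and the error is compressed away (solution phase).

```python
# Assumed toy dynamics: salience grows geometrically until the weighted
# prediction error crosses a threshold, then the belief starts updating.
belief, stimulus = 1.00, 1.05   # highly confident belief, tiny discrepancy
salience, resolving, trace = 0.1, False, []

for t in range(60):
    error = salience * abs(stimulus - belief)   # salience-weighted error
    trace.append(error)
    if not resolving and error < 2.0:
        salience *= 1.2          # question phase: the discrepancy gains salience
    else:
        resolving = True         # solution phase: update the belief,
        belief += 0.5 * (stimulus - belief)      # compressing the error away

peak = max(trace)
print(f"peak error {peak:.2f} at t={trace.index(peak)}, final {trace[-1]:.1e}")
```

The run has exactly the shape described above: a slow build-up to a sharp error maximum, followed by a fast geometric collapse.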
In order to make this two-phase dynamic work, I think the brain is specifically designed to avoid being “robust”—though here I’m using a narrower definition of the word than I suspect you intended. Specifically, several homeostatic mechanisms make the brain-state hug the border of a phase transition as tightly as possible. In other words, the brain maximizes the dynamic correlation length between neurons[4], which is the regime in which they have the greatest ability to influence each other across long distances (aka “communicate”). This is called the critical brain hypothesis, and it suggests that good thinking is necessarily chaotic in some sense.
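The standard toy model in the criticality literature is a branching process in which each firing neuron activates on average σ downstream neurons. A quick sketch (parameters mine, purely illustrative) shows why σ = 1 is special: subcritical activity dies out locally, supercritical activity saturates, and only the critical point produces the heavy-tailed avalanches that let activity propagate across long distances.

```python
import random

def avalanche_size(sigma, cap=5_000):
    """Total activity in a branching process with branching ratio sigma.

    Cartoon neuron: each active unit has two potential downstream units,
    each firing with probability sigma / 2 (mean offspring = sigma).
    """
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(random.random() < sigma / 2 for _ in range(2 * active))
    return size

random.seed(0)
for sigma in (0.8, 1.0, 1.2):   # subcritical, critical, supercritical
    sizes = [avalanche_size(sigma) for _ in range(500)]
    print(f"sigma={sigma}: mean {sum(sizes)/len(sizes):7.1f}, max {max(sizes)}")
```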
Another point is that insight-making is anti-inductive.[5] Theoretical reasoning is a frontier that’s continuously being exploited based on the brain’s native Value-of-Information estimator, which means that the forests with the highest naively-calculated VoI are also the least likely to have any low-hanging fruit remaining. What this implies is that novel insights are likely to be very narrow targets—which means they could be really hard to hold on to for the brief moment between initial hunch and build-up of salience. (Concise handle: epistemic frontiers are anti-inductive.)
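The anti-inductive point can be made mechanical with a toy model (region names and numbers are mine, purely illustrative): give every region the same amount of actual fruit, let every researcher consult the same naive VoI estimator, and the highest-scoring regions end up the most depleted, so by the time you arrive, naive VoI anti-predicts remaining fruit.

```python
naive_voi = {f"region_{i}": i + 1 for i in range(5)}   # shared naive estimator
fruit = {name: 30 for name in naive_voi}               # equal actual fruit

for _ in range(100):   # 100 researcher-visits to the frontier
    open_regions = [r for r in naive_voi if fruit[r] > 0]
    if not open_regions:
        break
    # Everyone exploits wherever the shared estimator scores highest.
    fruit[max(open_regions, key=naive_voi.get)] -= 1

for name in sorted(naive_voi, key=naive_voi.get):
    print(f"{name}: naive VoI {naive_voi[name]}, fruit left {fruit[name]:2d}")
```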
I scope my arguments only to “theoretical processing” (i.e. purely introspective stuff like math), and I don’t think they apply to “empirical processing”.
[Figure: harmonic (red) vs inharmonic (blue) waveforms.] When a waveform is harmonic, efferent neural ensembles can quickly entrain to it and stay in sync with minimal metabolic cost. Alternatively, in the context of predictive processing, we can say that “top-down predictions” quickly “learn to predict” bottom-up stimuli.
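To illustrate the entrainment point numerically (my own sketch; the frequencies and the periodicity measure are arbitrary choices): partials at integer multiples of a fundamental produce a waveform that repeats exactly, giving a downstream oscillator a stable period to lock onto, while irrationally related partials never realign.

```python
import numpy as np

t = np.linspace(0, 0.25, 2000, endpoint=False)   # 0.25 s at 8 kHz
f0 = 50.0                                        # fundamental; period = 160 samples
harmonic = sum(np.sin(2 * np.pi * f0 * k * t) for k in (1, 2, 3))
inharmonic = sum(np.sin(2 * np.pi * f0 * r * t) for r in (1.0, 2.13, 3.41))

def periodicity(x, min_lag=80):
    """Peak of the normalized autocorrelation past trivially small lags."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1 + min_lag:]
    return ac.max() / (x @ x)

print(f"harmonic:   {periodicity(harmonic):.2f}")   # ~0.9: stable period to lock onto
print(f"inharmonic: {periodicity(inharmonic):.2f}") # lower: partials never realign
```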
I basically think musical pleasure (and aesthetic pleasure more generally) maps to 1) the build-up of expectations, 2) the violation of those expectations, and 3) the resolution of those violated expectations. Good art has to constantly balance between breaking and affirming automatic expectations. I think the aesthetic chills associated with insights are caused by the same structure as appoggiaturas—the one-period delay of an expected tone at the end of a highly predictable sequence. I highly recommend this entire YT series!
I think the term originates from Eliezer, but Q Home has more relevant discussion on it—also I’m just a big fan of their chaotic-optimal reasoning style in general. Can recommend! 🍵
I think what quila is pointing at is their belief in the supposed fragility of thoughts at the edge of research questions.
Yes, thanks for noticing and making it explicit. It seems I was modelling Johannes as having a similar cognition type, since it would explain their behavior, which actually had a different cause.
I believe that if your research process / cognition algorithm is fragile enough that you’d be willing to take physical damage[1] to hold onto an inchoate thought, maybe consider making your cognition algorithm more robust.
My main response to ‘try to change your cognition algorithm if it is fragile’ is a reminder that human minds tend to work differently along unexpected dimensions. (Of course, you know this abstractly, and have probably read the same post about the ‘typical mind fallacy’. But the suggestion seems like harmful advice to follow for some of the minds it’s directed at.) (Alternatively, since you wrote ‘maybe’, this comment can be seen as describing a kind of case where it would be harmful.)
My highest-value mental states are fragile: they are hard to re-enter at will once left, and they take some subconscious effort to preserve/cultivate. They can also feel totally immersive and overwhelming when I manage to enter them. (I don’t feel confident in my ability to describe them in more detail, as much as I would like to; maybe not here or now.)
This is analogous to Johannes’ situation in a way. They believe the problem they have of working too hard is less bad to have than the standard problem of not feeling motivated to work. The specific irrational behavior their problem caused also ‘stands out’ more to onlookers, since it’s not typical. (One wouldn’t expect the top comment here if one described succumbing to akrasia; but if akrasia were rare in humans, such that the distribution over its most probable causes included some worrying possibilities, we might.)
In the same way, I feel like my cognition-algorithm is in a local optimum which is better than the standard one, where one lesser problem I face is that my highest-output mental states are ‘fragile’. Because this is not typical, it may (when read about in isolation) seem like a sign of ‘a negative deviation from the normal local optimum, which this person would be better off correcting’.
From my inside perspective, I don’t want to try to avoid fragile mental states, because I think such a change would only be possible as part of a more general directional change away from ‘how my cognition works (at its best)’ towards ‘how human cognition typically works’.
(And because the fragility-of-thought feels like a small problem once I learned to work around it, e.g. learning to preserve states and augmenting with external notes. At least when compared to the problem most have of not having a chance at generating insights of the quality our situation necessitates.)
… although, if you knew of a method to reduce fragility while not reducing other things, then I’d love to try it :)
On ‘willing to take physical damage …’, footnoted because it seems like a minor point: this seems like another case where avoiding the typical-mind fallacy is important, since different minds have different pain tolerances / levels of experienced pain from a cut.
How much does this line up with your model?