I’m pretty sure that, at some level, what sorts of things your brain spits out into your consciousness, and how useful that information is in the given situation, is something that you can’t fundamentally change. I expect this to be a hard-coded algorithm.
I do think that tuning cognitive strategies (and practice in general) is relevant to improving the algorithm.
Practically hard-coded vs. Literally hard-coded
My introspective impression is less that there are “hard-coded algorithms” in the sense of hardware vs. software, and more that it is, for most practical purposes, impossible for humans to make major changes.
Our access to unconscious decision-making is limited, and there is a huge number of decisions one would need to focus on. I think this is a large reason why the realistic options for people are mostly i) only ever scratching the surface in a large number of directions of cognitive improvement, or ii) focusing really strongly on a narrow topic and becoming impressive in that topic alone[1].
Then, our motivational system is not really optimizing for this process and may well push in different directions. The motivational system is part of the algorithm itself, which means there is a bootstrapping problem: people with unsuited motivations will never be motivated to change their way of thinking.
Why this matters
Probably we mostly agree on what this means for everyday decisions.
But with coming technology, some things might change.
- Longevity/health might make more long-term improvement worthwhile (probably not enough by itself unless we reach astronomical lifespans).
- Technology might become more integrated into the brain. It does not seem impossible that “Your memory is unreliable. Let us use some tech, put it under a significant training regimen, and make it reliable” will become possible at some point.
- Technologies like IVF could give “average people” a higher starting point with regard to self-reflection and reliable cognition.
Also, this topic is relevant to AI takeoff. We perceive an in-principle possibility of significantly improving our cognition, but notice that in practice current humans are not capable of pulling it off. This suggests that beings somewhat beyond our cognitive abilities might hit this threshold and then execute the full cycle of reflective self-improvement.
Tune Your Cognitive Strategies purports to offer a technique which can improve that class of algorithm significantly.
Edit: Oh, no, you meant a different thing, and this probably goes into the “inputs to the algorithm” category?
I think this is the pragmatic argument for thinking in separate magisteria.