I’ve read them both, plus a bunch of your other posts. I think understanding the brain is pretty important for analyzing Factored Cognition—my problem (and this is one I have in general) is that I find it almost impossibly difficult to go learn about a field I don’t yet know anything about without guidance. That’s why I had resigned myself to writing the sequence without engaging with the neuroscience literature. Your posts have helped with that, though, so thanks.
Fortunately, insofar as I’ve understood things correctly, your framework (which I know is a selection of theories from the literature and not uncontroversial) appears to agree with everything I’ve written in the sequence. More generally, I find that the generative-model picture strongly aligns with introspection, which has been my guide so far. When I pay attention to how I think about a difficult problem—and I’ve done that a lot while writing the sequence—it feels very much like waiting for the right hypothesis or explanation to appear, not like reasoning backward. What creates the illusion of control is precisely that we can decompose problems and think about subquestions, so that part is a sort of reasoning backward at a high level—but at bottom, I’m relying entirely on my brain to just spit out explanations.
Anyway, now I can add some (albeit indirect) reference to the neuroscience literature into that part of the sequence, which is nice :-)
Thanks! Haha, nothing wrong with introspection! It’s valid data, albeit sometimes misinterpreted or overgeneralized. Anyway, looking forward to your future posts!