I find it helpful to think about our brain’s understanding as lots of subroutines running in parallel. They mostly just sit around doing nothing. But sometimes they recognize a scenario for which they have something to say, and then they jump in and say it. So in chess, there’s a subroutine that says “If the board position has such-and-such characteristics, it’s worthwhile to consider protecting the queen.” There’s a subroutine that says “If the board position has such-and-such characteristics, it’s worthwhile to consider moving the pawn.” And of course, once you consider moving the pawn, that brings to mind a different board position, and then new subroutines will recognize it, jump in, and have their say.
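(Here’s a minimal toy sketch of that picture, of my own devising and purely illustrative, with made-up cue names: a pool of subroutines that mostly stay silent and only speak up when their cue matches the current position.)

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subroutine:
    name: str
    trigger: Callable[[dict], bool]  # "is this a scenario I have something to say about?"
    advice: str                      # what it says when it jumps in

# The cue names ("queen_attacked", "pawn_can_advance") are hypothetical, just for illustration.
subroutines = [
    Subroutine("protect-queen",
               lambda pos: pos.get("queen_attacked", False),
               "consider protecting the queen"),
    Subroutine("move-pawn",
               lambda pos: pos.get("pawn_can_advance", False),
               "consider moving the pawn"),
]

def consult(position: dict) -> list[str]:
    # Every subroutine looks at the position; most stay silent, and the ones
    # whose cue matches jump in and have their say.
    return [s.advice for s in subroutines if s.trigger(position)]

print(consult({"queen_attacked": True}))  # ['consider protecting the queen']
```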
So if you take an imperfect rule, like “Python code runs the same on Windows and Mac”, the reason we can get by with this rule is that we have a whole ecosystem of subroutines on the lookout for exceptions to it. There’s the main subroutine that says “Python code runs the same on Windows and Mac.” But there’s another subroutine that says “If you’re sharing code between Windows and Mac, and there’s a file path variable, then it’s important to follow such-and-such best practices”. And yet another subroutine is sitting around looking for UI code, ready to interject that UI behavior can also be a cross-platform incompatibility. And yet another subroutine is watching for you to call a system library, etc. etc.
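(As one minimal, concrete sketch of the kind of best practice that file-path subroutine might be pointing at, and it’s just one common example rather than the whole story: build paths with pathlib instead of hard-coding separators, so the same code works on Windows and Mac.)

```python
from pathlib import Path

# Brittle: hard-codes "/" and a Unix-style home directory, so it breaks on Windows.
# config_path = "/Users/me/project/config.yaml"

# Portable: let pathlib supply the correct separator and home directory on each OS.
config_path = Path.home() / "project" / "config.yaml"
print(config_path)
# -> /Users/me/project/config.yaml on Mac, C:\Users\me\project\config.yaml on Windows
```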
So, imagine you’re working on a team, and you go to a team meeting. You sit around for a while, not saying anything. But then someone suggests an idea that you happened to have tried last week, which turned out not to work. Of course, you would immediately jump in to share your knowledge with the rest of the meeting participants. Then you go back to sitting quietly and listening.
I think your whole understanding of the world and yourself and everything is a lot like that. There are countless millions of little subroutines, watching for certain cues, and ready to jump in and have their say when appropriate. (Kaj calls these things “subagents”, I more typically call them “generative models”, Kurzweil calls them “patterns”, Minsky calls this idea “society of mind”, etc.)

Factored cognition doesn’t work this way (and that’s why I’m cautiously pessimistic about it). Factored cognition is like you show up at the meeting, present a report, and then leave. If you would have had something important to say later on, too bad, you’ve already left the room. I’m skeptical that you can get very far in figuring things out if you’re operating under that constraint.
Factored cognition doesn’t work this way (and that’s why I’m cautiously pessimistic about it).
I come to similar conclusions in what is right now post #4 of the sequence (this is #-1). I haven’t read any of the posts you’ve linked, though, so I probably arrived at it through a very different process. I’m definitely going to read them now.
But don’t be too quick to write off Factored Cognition entirely based on that. The fact that it’s a problem doesn’t mean it’s unsolvable.
But don’t be too quick to write off Factored Cognition entirely based on that. The fact that it’s a problem doesn’t mean it’s unsolvable.
I agree. I’m always inclined to say something like “I’m a bit skeptical about factored cognition, but I guess maybe it could work, who knows, couldn’t hurt to try”, but then I remember that I don’t need to say that, because practically everyone else thinks that too, even its most enthusiastic advocates, as far as I can tell from my very light and casual familiarity with it.
I haven’t read any of the posts you’ve linked
Hmm, maybe if you were going to read just one of mine on this particular topic, it should instead be Can You Get AGI From A Transformer rather than the one I linked above. Meh, either way.
I’ve read them both, plus a bunch of your other posts. I think understanding the brain is pretty important for analyzing Factored Cognition. My problem (and this is one I have in general) is that I find it almost impossibly difficult to go and learn about a field I don’t yet know anything about without guidance. That’s why I had just accepted that I was writing the sequence without engaging with the neuroscience literature. Your posts have helped with that, though, so thanks.
Fortunately, insofar as I’ve understood things correctly, your framework (which I know is a selection of theories from the literature and not uncontroversial) appears to agree with everything I’ve written in the sequence. More generally, I find that the generative-model picture aligns strongly with introspection, which has been my guide so far. When I pay attention to how I think about a difficult problem, and I’ve done that a lot while writing the sequence, it feels very much like waiting for the right hypothesis or explanation to appear, not like reasoning backward. The mechanism that gives an illusion of control is precisely the fact that we can decompose problems and think about subquestions, so that part is a sort of reasoning backward at a high level; but at bottom, I’m purely relying on my brain to just spit out explanations.
Anyway, now I can add some (albeit indirect) reference to the neuroscience literature into that part of the sequence, which is nice :-)
Thanks! Haha, nothing wrong with introspection! It’s valid data, albeit sometimes misinterpreted or overgeneralized. Anyway, looking forward to your future posts!