I agree that, in the context of an agent built from an LLM and scaffolding (such as memory and critic systems), the LLM is analogous to the human System 1. But in general, LLMs’ capability profiles are rather different from those of humans: no human is as well-read as even GPT-3.5, while LLMs have specific deficiencies that we don’t, such as around counting, character- and word-level representations of text, and instruction following. So the detailed “System 1” capabilities of such an architecture might not look much like human System 1 capabilities, especially if the LLM were dramatically larger than current ones. For example, for a sufficiently large LLM trained using current techniques, I’d expect “produce flawless, well-edited text at a quality that would take a team of typical humans days or weeks” to be a very rapid System 1 activity.