cognition can be seen as acting (almost) only on the tape
… If not for the existence of non-verbal cognition, which works perfectly well even without a “tape”. This suggests that the tape isn’t a crucial component, that the heavy lifting can be done by the abstract algorithm alone, and therefore that even in supposed verbal thinkers, that algorithm is likely what’s doing the actual work.
In my view, there’s an actual stream of abstract cognition, and a “translator” function mapping from that stream to human language. When we’re doing verbal thinking, we’re constantly running the translator on our actual cognition, which has various benefits (e.g., it’s easier to communicate our thoughts to other humans); but the items in the natural-language monologue are compressed versions of the items in the abstract monologue, and they’re strictly downstream of the abstract stream.
So you think:
There’s a “stream” of abstract thought, or “abstract monologue”
The cognition algorithm operates on/produces the abstract stream
Natural language is a compressed version of the abstract stream
Which seems to me to be the same thing I said above, unless you are also implying one or both of these additional statements:
a) The abstract cognition algorithm cannot be seen as operating mostly autoregressively on its “abstract monologue”;
b) The abstract monologue cannot be translated to a longer, but boundedly longer, natural language stream (without claiming that this is what happens typically when someone verbalizes).
Which of (a), (b) do you endorse, possibly with amendments?
I don’t necessarily endorse either. But “boundedly longer” is doing a lot of work there. As I’d mentioned, cognition can also be translated into a finitely long sequence of NAND gates. The real question isn’t “is there a finitely-long translation?”, but how much longer that translation is.
And I’m not aware of any strong evidence suggesting that natural language is close enough to human cognition that the resultant stream would not be much longer: long enough to be ruinously compute-intensive (effectively as ruinous as translating it into NAND-gate sequences).
Indeed, I’d say there’s plenty of evidence to the contrary, given how central miscommunication is to the human experience.