They can certainly use answer text as a scratchpad (even nonfunctional text that gives more space for hidden activations to flow). But they don’t without explicit training. Actually, maybe they do: maybe RLHF incentivizes a verbose style precisely because it gives more room for thought. But I think even when “thinking step by step,” there are still plenty of issues.
Tokenization is definitely a contributor. But that doesn’t really support the notion that there’s an underlying human-like cognitive algorithm behind the human-like text output. The point is that the way the model adds numbers is very inhuman, despite producing human-like output on the most common/easy cases.
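For what it’s worth, the tokenization point is easy to check directly. Here’s a minimal sketch (assuming the tiktoken package is available; the “gpt2” encoding is just one illustrative example, not necessarily the model under discussion) that prints how a BPE tokenizer chunks digit strings. The chunk boundaries typically don’t line up with place value, which is part of why digit-wise carrying is awkward for the model:

```python
# Minimal sketch: inspect how a BPE tokenizer chunks digit strings.
# Assumes the `tiktoken` package is installed; "gpt2" is an
# illustrative encoding choice, not a claim about any specific model.
import tiktoken

enc = tiktoken.get_encoding("gpt2")

for s in ["7 + 5", "123 + 456", "123456789 + 987654321"]:
    token_ids = enc.encode(s)
    # Decode each token individually to see its surface string;
    # multi-digit numbers often get split into irregular chunks
    # rather than one token per digit.
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{s!r} -> {pieces}")
```

Since the same digit sequence can be chunked differently depending on its neighbors, the model never sees a stable digit-aligned representation to run a human-style addition algorithm over.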
I definitely agree that it doesn’t give reason to support a human-like algorithm; I was focusing on the part about adding numbers reliably.