I do think the human brain uses two very different algorithms/architectures for thought generation and assessment. But this falls within the “things I’m not trying to justify in this post” category. I think if you reject the conclusion based on this, that’s completely fair. (I acknowledged in the post that the central claim has a shaky foundation. I think the model should get some points because it does a good job retroactively predicting LLM performance—like, why LLMs aren’t already superhuman—but probably not enough points to convince anyone.)