HLE and benchmarks like it are cool, but they fail to test the major deficits of language models, like how they can only remember things by writing them down on a scratchpad, like the guy in Memento.
A scratch pad for thinking, in my view, is hardly a deficit at all! Quite the opposite. In the case of people, some level of conscious reflection is important and probably necessary for higher-level thought. To clarify, I am not saying consciousness itself is in play here. I’m saying some feedback loop is probably necessary — where the artifacts of thinking, reasoning, or dialogue can themselves become objects of analysis.
My claim might be better stated this way: if we want an agent to do sufficiently well on higher-level reasoning tasks, it probably needs to operate at various levels of abstraction, and we shouldn’t be surprised if this is accomplished by way of observable artifacts that bridge the different layers. Whether the mechanism is something akin to chain of thought or something else seems incidental to the question of intelligence (by which I mean an agent’s competence at a task, following Stuart Russell’s definition).
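To make the feedback loop I have in mind concrete, here is a minimal sketch of my own (not anything from the benchmark or the parent comment): an agent whose only working memory is a scratchpad of its own prior steps, each of which is fed back in as input to the next step. The `propose_step` function is a hypothetical stand-in for whatever actually generates the next thought, e.g. an LLM call.

```python
from typing import Callable, List

def reason(task: str,
           propose_step: Callable[[str, List[str]], str],
           max_steps: int = 5) -> List[str]:
    """Iterate a proposer over a growing scratchpad of its own outputs."""
    scratchpad: List[str] = []
    for _ in range(max_steps):
        # Each step is conditioned on the task *and* every artifact produced
        # so far, so earlier reasoning becomes an object of later analysis.
        step = propose_step(task, scratchpad)
        scratchpad.append(step)
        if step.strip().lower().startswith("answer:"):
            break  # stop once a step declares an answer
    return scratchpad

# Toy usage with a trivial proposer that just counts its prior artifacts.
if __name__ == "__main__":
    def toy_proposer(task: str, pad: List[str]) -> str:
        if len(pad) >= 2:
            return f"answer: done after {len(pad)} prior steps"
        return f"thought {len(pad) + 1} about {task}"

    print(reason("2 + 2", toy_proposer))
```

The point of the sketch is only that the "memory" lives entirely in the externalized artifacts, which is exactly the property the parent comment treats as a deficit.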
I don’t think the author would disagree, but this leaves me wondering why they wrote the last part of the sentence above. What am I missing?