I don’t know a good description of what, in general, 2024 AI should be good at and not good at. But two remarks, from https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce.

First, reasoning at a vague level about “impressiveness” just doesn’t work, and shouldn’t be expected to. Because 2024 AIs don’t do things the way humans do, they generalize differently, so you can’t make inferences from “it can do X” to “it can do Y” the way you can with humans:
There is a broken inference. When talking to a human, if the human emits certain sentences about (say) category theory, that strongly implies that they have “intuitive physics” about the underlying mathematical objects. They can recognize the presence of the mathematical structure in new contexts, they can modify the idea of the object by adding or subtracting properties and have some sense of what facts hold of the new object, and so on. This inference (emitting certain sentences implies intuitive physics) doesn’t work for LLMs.
Second, 2024 AI is specifically trained on short, clear, measurable tasks. Those tasks also overlap with legible stuff, that is, stuff that’s easy for humans to check. In other words, these systems are, in a sense, specifically trained to trick your sense of how impressive they are: they’re trained on legible stuff, with not much constraint on the less-legible stuff (and in particular, on the stuff that only becomes legible through total failure at more difficult / longer-time-horizon tasks).
The broken inference is broken because these systems are optimized for being able to perform all the tasks that don’t take a long time, are clearly scorable, and have lots of data showing performance. There’s a bunch of stuff that’s really important (and is a key indicator of having underlying generators of understanding) but takes a long time, isn’t clearly scorable, and doesn’t have a lot of demonstration data. But that stuff is harder to talk about and isn’t as intuitively salient as the short, clear, demonstrated stuff.