Hm, if I look at your table (https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long?curius=1279#The_Table), are you saying that LLMs (GPT-3, Chinchilla) are more general in their capabilities than a cat brain or a lizard brain?
At the brain-level I’d agree, but at the organism level I’m less sure. Today’s LLMs may indeed be more general than a cat brain. But I’m not sure they’re more general than the cat as a whole. The cat (or lizard) has an entire repertoire of adaptive features built into the rest of the organism’s physiology (not just the brain). Prof. Michael Levin has a great talk on this topic; the first 2-3 minutes give a good overview.
I’m not sure whether we should evaluate generality in terms of what the brain itself can do (where, I agree, the LLM is more general), or in terms of what the whole organism can do (where, I think, the cat and the lizard can potentially do more). The biological and cellular machinery is a whole lot more adaptive under the hood.
And I suppose even at the whole-organism level it’s sort of a tough call which one’s more general!