Certainly, state-of-the-art LLMs perform an enormous number of tasks that, from a user's perspective, count as general reasoning. They can handle plenty of mathematical and scientific problems; they can write decent code; they can certainly hold coherent conversations; they can answer many counterfactual questions; they even predict Supreme Court decisions pretty well. What are we even talking about when we question how general they are?
We are clearly talking about something very different from this when we say animals are general. Animals can do none of those things. So are animals, except for humans, really narrow systems, not general ones? Or are we improperly mixing generality with intelligence when we talk about AI generality?
I do very much duck that question here (as I say in a footnote, feel free to mentally substitute ‘the sort of reasoning that would be needed to solve the problems described here’). I notice that the piece you link itself links to an Arbital piece on general intelligence, which I haven’t seen before but am now interested to read.
I would like to see attempts to come up with a definition of “generality”. Animals seem to be very general, despite not being very intelligent compared to us.