I am no expert, but I agree with you. They are cool, and they could be a component in something. But they seem like they are only doing part of the “intelligence thing”.
I don’t know if anyone else has spoken about this, but since thinking about LLMs a little I am starting to feel like there is something analogous to a small LLM (an SLM?) embedded somewhere as a component in humans. I think I see it when someone gets asked (in person) a question, and they start giving an answer immediately, then suddenly interrupt themselves to give the opposite answer. Usually the trend is that the first answer was the “social answer”, something like: “In this situation the thing my character does is to agree enthusiastically that your project, which you are clearly super excited about, is cool, and to tell you I will work on it full steam.” Then some other part of the self kicks in: “Wait, after 30 seconds of consideration I have realised that this idea can never work. Let me prove it to you.” At least to me it even feels like that: there is some “conversation continuer” component. Obviously the build-up of an AI doesn’t have to mirror that of a human intelligence, but if we want to build something “human level” then it stands to reason that it would end up with specialized components for the same sorts of things humans have specialized components for.
But they seem like they are only doing part of the “intelligence thing”.
I want to be careful here; there is some evidence to suggest that they are doing (or at least capable of doing) a huge portion of the “intelligence thing”, including planning, induction, and search, and even more if you include minor external capabilities like storage.
I don’t know if anyone else has spoken about this, but since thinking about LLMs a little I am starting to feel like there is something analogous to a small LLM (an SLM?) embedded somewhere as a component in humans
I know that the phenomenon has been studied for reading and listening (I personally get a kick out of garden-path sentences); the relevant fields are “natural language processing” and “computational linguistics”. I don’t know of any work that specifically addresses it in the “speaking” setting.
if we want to build something “human level” then it stands to reason that it would end up with specialized components for the same sorts of things humans have specialized components for.
Soft disagree. We’re actively building the specialized components because that’s what we want, not because that’s particularly useful for AGI.
I am no expert, but I agree with you. They are cool, and they could be a component in something. But they seem like they are only doing part of the “intelligence thing”.
Eliezer seems to think they can do more: https://www.lesswrong.com/posts/qkAWySeomp3aoAedy/?commentId=KQxaMGHoXypdQpbtH.