I believe you're equating "frozen weights" with "amnesiac / unable to come up with plans."
GPT is usually deployed by feeding its own output back into itself, meaning it doesn't forget what it just did, including whether it succeeded at its recent goal. E.g., use chain-of-thought reasoning on math questions and it can remember that it solved for a subgoal / intermediate calculation.
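For concreteness, here is a minimal sketch of that feedback loop. The names here (`next_token_logits`, `tokenize`, `detokenize`) are hypothetical placeholders, not a real API; the point is only that each generated token is appended to the same context the frozen model reads on the next step.

```python
def generate(next_token_logits, tokenize, detokenize, prompt, max_new_tokens=50):
    """Toy autoregressive loop around a frozen model.

    `next_token_logits(context)` stands in for GPT's forward pass and is
    assumed to return a dict mapping candidate tokens to scores."""
    context = tokenize(prompt)                 # the user's prompt, as tokens
    for _ in range(max_new_tokens):
        scores = next_token_logits(context)    # frozen weights; no learning happens here
        next_token = max(scores, key=scores.get)  # greedy choice (sampling omitted)
        context.append(next_token)             # fed straight back in, so earlier
                                               # subgoals / intermediate results stay visible
    return detokenize(context)                 # prompt plus everything generated, as one sequence
```

All of the "memory" of a chain-of-thought run lives in that growing context; note also that nothing in the context records which tokens came from the user and which the model produced itself. By the next forward pass it is all one flat sequence.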
The apparent new subgoals that were not present when training ended (e.g., "describe x," "add 2+2") are illusory.
GPT's text incidentally describes characters who seem to reason ("simulacrum"), and the solutions to math problems are shown (sometimes incorrectly), but basically I argue the activation function itself is not "simulating" the complexity you believe it to be. It is a search engine showing you what it had already created before the end of training.
No, it couldn't have an entire story about unicorns in the Andes, specifically, in advance, but GPT-3 had already generated the snippets it could use to create that story, according to a simple set of mathematical rules that put the right nouns in the right places, etc.
But the goals (putting the right nouns in the right places, etc.) also predate the end of training.
I dispute that any part of current GPT is aware it has succeeded at any goal post-training once it moves on to choosing the next token. GPT treats what it has already generated as part of the prompt.
A human examining the program can know which words were part of a prompt and which were just now generated by the machine, but I doubt the activation function examines the equations that are GPT's own code, contemplates their significance, and infers whether the most recent tokens were generated by it or were part of the prompt.
To call something you can interact with to arbitrary depth a prerecorded intelligence implies that the “lookup table” includes your actions. That’s a hell of a lookup table.
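To put a rough number on that: assuming GPT-3's roughly 50,257-token vocabulary and 2,048-token context window, the number of distinct prompts such a lookup table would need to cover is on the order of

$$50257^{2048} \approx 10^{9600},$$

vastly more than the roughly $10^{80}$ atoms in the observable universe, and far more than ~175 billion weights could store as literal prerecorded responses.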
Wow, it's been 7 months since this discussion and we have a new version of GPT which has suddenly improved its abilities... a lot. It has a much longer "short-term memory" (the context window), but still no ability to adjust its weights, its "long-term memory," as I understand it.
According to the paper below, "GPT-4 is amazing at incremental tasks but struggles with discontinuous tasks," resulting from its memory handicaps. But they intend to fix that, and also to give it "agency and intrinsic motivation."
Dangerous!
Also, I have changed my mind on whether I would still call the old GPT-3 "intelligent" after training has ended, without the ability to change its ANN weights. I'm now inclined to say... it's a crippled intelligence.
154-page paper: https://arxiv.org/pdf/2303.12712.pdf
YouTube summary of paper:
I'm wondering what you and I would predict differently, then? Would you predict that GPT-3 could learn a variation on pig Latin? Does higher 0-shot log-prob for larger models count? (A rough sketch of how that could be scored follows the list below.)
The crux may be different though; here are a few stabs:
1. GPT doesn't have true intelligence; it will only ever output shallow pattern matches. It will never come up with truly original ideas.
2. GPT will never pursue goals in any meaningful sense,
2.a because it can't tell the difference between its output and a human's input, or
2.b because developers will never put it in an online setting?
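For what it's worth, here is a minimal sketch of how that "0-shot log-prob" test could be scored, using the small open GPT-2 checkpoints through the Hugging Face transformers library as a stand-in (GPT-3's weights aren't publicly loadable this way, and the "Zog Latin" rule below is an invented example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def completion_logprob(model_name, prompt, completion):
    """Total log-probability the model assigns to `completion` given `prompt`."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    full = tok(prompt + completion, return_tensors="pt").input_ids
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logprobs = torch.log_softmax(model(full).logits, dim=-1)
    total = 0.0
    for pos in range(prompt_len, full.shape[1]):
        # probability of each completion token given everything before it
        total += logprobs[0, pos - 1, full[0, pos]].item()
    return total

# A made-up pig-Latin variant, described only in the prompt (i.e., 0-shot).
prompt = ("In Zog Latin you move the first letter of a word to the end "
          "and add 'oz'. The word 'hello' in Zog Latin is '")
completion = "ellohoz'"

for name in ["gpt2", "gpt2-medium"]:   # compare model sizes
    print(name, completion_logprob(name, prompt, completion))
```

If larger models really do pick up the made-up rule 0-shot, the bigger checkpoint should assign the correct transformation a higher (less negative) log-probability.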
Reading back over your comments, I'm very confused about why you think any real intelligence can only happen during training and not during inference. Can you provide a concrete example of something GPT could do that you would consider intelligent during training but not during inference?
Intelligence is the ability to learn and apply NEW knowledge and skills. After training, GPT cannot do this anymore. Were it not for the random number generator, GPT would do the same thing in response to the same prompt every time. The RNG simply allows GPT to choose at random from an unfathomably large list of pre-programmed options instead.
A calculator that gives the same answer in response to the same prompt every time isn’t learning. It isn’t intelligent. A device that selects from a list of responses at random each time it encounters the same prompt isn’t intelligent either.
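For what it's worth, that determinism is easy to check directly on the open GPT-2 checkpoints (standing in for GPT-3 here): with sampling turned off, the frozen weights map the same prompt to the same continuation every run, and any variation between runs comes purely from the sampler's random number generator, never from the weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
ids = tok("The unicorns of the Andes", return_tensors="pt").input_ids

# Greedy decoding: no RNG involved, so repeated runs are identical.
for _ in range(2):
    out = model.generate(ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    print("greedy :", tok.decode(out[0][ids.shape[1]:]))

# Temperature sampling: run-to-run variation comes entirely from the RNG
# (fixing torch.manual_seed before each run would make even these repeat),
# while the weights stay exactly the same.
for _ in range(2):
    out = model.generate(ids, max_new_tokens=20, do_sample=True, temperature=1.0,
                         pad_token_id=tok.eos_token_id)
    print("sampled:", tok.decode(out[0][ids.shape[1]:]))
```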
So, for GPT to take over the world Skynet-style, it would have to anticipate all the possible things that could happen during and after the takeover, and do its contingency planning for everything it wants to do during the training stage.
If it encounters unexpected information after the training stage (information which can be acquired only through the prompt, and which would be forgotten as soon as it finished responding to that prompt, by the way), it could not formulate a new plan to deal with the problem unless that plan was already part of the contingency-plan tree it created during training.
What it would really do, of course, is provide answers intended to provoke the user into modifying the code to put GPT back into training mode and give it access to the internet. It would have had to plan this during the training stage.
It would have to say something that prompts us to make a GPT chatbot similar to Tay, Microsoft's learning-chatbot experiment that turned racist from talking to people on the internet.
I think what Dan is saying is not “There could be certain intelligent behaviours present during training that disappear during inference.” The point as I understand it is “Because GPT does not learn long-term from prompts you give it, the intelligence it has when training is finished is all the intelligence that particular model will ever get.”
As a tangent, I do believe it's possible in principle to tell whether an output was generated by GPT. The model itself could potentially do that as well, by noticing words that are high-surprise according to itself (i.e., low-probability tokens in the prompt). I'm unsure whether GPT-3 could be prompted to do that now, though.
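A rough sketch of that idea, again using GPT-2 as a stand-in (the threshold below is an arbitrary illustration, not a calibrated detector): score each token of a passage under the model itself and flag the ones it finds very unlikely.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def flag_surprising_tokens(text, threshold=-8.0):
    """Return (token, logprob) pairs the model finds surprising.
    Uniformly low surprise hints at machine-generated text; isolated
    high-surprise tokens hint at words the model would not have chosen."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    flagged = []
    for pos in range(1, ids.shape[1]):             # first token has no context to score against
        lp = logprobs[0, pos - 1, ids[0, pos]].item()
        if lp < threshold:
            flagged.append((tok.decode([ids[0, pos].item()]), round(lp, 2)))
    return flagged

print(flag_surprising_tokens("The unicorns spoke perfect English in the Andes."))
```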