Whether it has a global “plan” is irrelevant as long as it behaves like someone with a global plan (which it does). Consider the thought experiment where I show you a block of text and ask you to come up with the next word. After you come up with the next word, I rewind your brain to before the point where I asked you (so you have no memory of coming up with that word) and repeat ad infinitum. If you are skeptical of the “rewinding” idea, just imagine a simulated brain and we’re restarting the simulation each time. You couldn’t have had a global plan because you had no memory of each previous step. Yet the output would still be totally logical. And as long as you’re careful about each word choice at each step, it is scientifically indistinguishable from someone with a “global plan”. That is similar to what GPT is doing.
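The rewinding thought experiment can be sketched as a toy program. Here the "model" is a pure function of the text produced so far, with no hidden state carried between calls, so each call is like the rewound brain: no memory of previous steps, yet the output stays coherent because each choice conditions on everything written so far. The bigram table and function names are hypothetical illustrations, not how GPT actually works internally:

```python
# Hypothetical toy next-word table; a real language model conditions on the
# full context with learned weights, not a lookup table.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def next_word(text: str) -> str:
    """Pick the next word purely from the text so far -- no hidden state."""
    last = text.split()[-1]
    return BIGRAMS.get(last, "the")

def generate(prompt: str, n_steps: int) -> str:
    text = prompt
    for _ in range(n_steps):
        # Nothing persists between calls: the function is "rewound" each
        # step, yet the running text keeps the output globally coherent.
        text = text + " " + next_word(text)
    return text

print(generate("the", 4))  # -> "the cat sat on the"
```

Because `next_word` sees only the accumulated text, it has no "global plan" in any meaningful sense, yet an observer reading the output could not distinguish it from a planner: all the coherence lives in the per-step conditioning.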