I believe this is a non-scientific question, similar in spirit to philosophical-zombie questions. Person A says “GPT did come up with a number by that point” and Person B says “GPT did not come up with a number by that point,” but as long as it still outputs the correct responses after that point, neither person can be proven correct. This is why real-world scientific results from assessing these AI capabilities are far more informative than intuitive ideas of what they’re supposed to be able to do (even if a model is only trained to predict the next word, it’s wrong to assume a priori that a next-word predictor is incapable of specific tasks, or to declare its achievements “faked intelligence” when it gets them right).
Max Loh
Whether it has a global “plan” is irrelevant as long as it behaves like someone with a global plan (which it does). Consider the thought experiment where I show you a block of text and ask you to come up with the next word. After you come up with the next word, I rewind your brain to before the point where I asked (so you have no memory of coming up with that word) and repeat ad infinitum. If you’re skeptical of the “rewinding” idea, just imagine a simulated brain whose simulation we restart each time. You couldn’t have had a global plan, because you had no memory of any previous step. Yet the output would still be perfectly coherent, and as long as you’re careful about each word choice at each step, it is scientifically indistinguishable from the output of someone with a “global plan.” That is essentially what GPT is doing.
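To make the rewind experiment concrete, here’s a minimal Python sketch (an illustration only, not GPT’s actual implementation; `pick_next_word` and its canned continuations are hypothetical stand-ins for a real next-word predictor). Each step is a pure function of the text produced so far, with no memory carried between calls, yet the final output reads as if it had been planned in advance:

```python
def pick_next_word(text: str) -> str:
    # Toy predictor: a real model would score a whole vocabulary and
    # sample; here we follow a canned continuation for illustration.
    continuation = {
        "The cat": "sat",
        "The cat sat": "on",
        "The cat sat on": "the",
        "The cat sat on the": "mat.",
    }
    return continuation.get(text, "")

def generate(prompt: str) -> str:
    text = prompt
    while True:
        # Each call is a fresh "rewound brain": it sees only the text
        # so far and retains nothing between iterations.
        word = pick_next_word(text)
        if not word:          # no continuation available: stop
            return text
        text = text + " " + word  # the only state is the text itself

print(generate("The cat"))  # -> "The cat sat on the mat."
```

No step remembers any earlier step; the running text is the only thing passed forward, which is exactly the sense in which each word choice can be careful without there ever being a stored global plan.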