There seems to be an interesting difference between the “simulators” view and the “story-generators” view. Namely, if GPT-N is just going to get better at generating stories of the same kind that already exist, then why be afraid of it? But if it’s going to get better at simulating how people talk, then we should be very afraid, because a simulation of smart people talking and making detailed plans at high speed would be basically a superintelligence.
I don’t know what you mean by “GPT-N” but if you mean “the same thing they do now, but scaled up,” I’m doubtful that it will happen that way.
Language models are made using fill-in-the-blank (next-token prediction) training, which is about imitation. Some things can be learned that way, but to get better at doing hard things (like playing Go at a superhuman level) you need training that's about winning increasingly harder competitions. Beyond a certain point, imitating game transcripts doesn't get any harder, so it becomes more like learning stage sword fighting.
Also, “making detailed plans at high speed” is similar to “writing extremely long documents.” There are limits on how far back a language model can look in the transcript (its context window). That window is difficult to extend because standard self-attention is an O(N²) algorithm in sequence length, though I’ve seen a paper claiming it can be improved.
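To make the scaling claim concrete, here is a minimal sketch of dot-product self-attention in NumPy (the function name and dimensions are illustrative, not any particular model's; the point is only the shape of the score matrix):

```python
import numpy as np

def attention_scores(seq_len: int, d_model: int = 64) -> np.ndarray:
    """Toy dot-product self-attention scores.

    The score matrix is (seq_len x seq_len), so both memory and compute
    grow quadratically as the context length N increases.
    """
    rng = np.random.default_rng(0)
    q = rng.standard_normal((seq_len, d_model))  # queries
    k = rng.standard_normal((seq_len, d_model))  # keys
    # Every position attends to every other position: an N x N matrix.
    return q @ k.T / np.sqrt(d_model)

# Doubling the context length quadruples the score matrix:
assert attention_scores(128).size == 4 * attention_scores(64).size
```

This is why naively extending the context (and hence the length of a coherent plan or document) gets expensive quickly: doubling N quadruples the attention work per layer.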
Language models aren’t particularly good at reasoning, let alone long chains of reasoning, so it’s not clear that using them to generate longer documents will result in them getting better results.
So there might not be much incentive for researchers to work on language models that can write extremely long documents.
A weak superintelligence at most: you are proposing an accuracy no better than samples of actual smart people (with all the fictional characters, who are not actually smart, adding noise). At best it would be a narrative simulation of top human scientists, just running at a faster speed.
Since it has no mind's eye, working memory, 3D reasoning, vision, or drawing, it would be crippled. At least until AI labs add all of that, which they will soon enough.
Vaguely descriptive frames can be taken as prescriptive, motivating particular design changes.