Language models can generate superior text compared to their input
There’s a common misconception that a large language model can never achieve superhuman text-generation ability because such models try to produce text that is maximally unsurprising. This article explains why that assumption is wrong.
In 1906, Sir Francis Galton conducted an experiment at a country fair, where he asked fair-goers to guess the weight of an ox in a weight-judging competition. The median of 787 guesses was 1,207 pounds, while the actual weight of the ox was 1,198 pounds. The error in any individual guess is a combination of systematic bias and random noise. The fair-goers, having knowledge of oxen, had no bias in their guesses, so the error was almost entirely random noise. By pooling the 787 guesses, Galton averaged out the random noise of the individual guesses.
This phenomenon came to be known as the wisdom of the crowd. In areas where reasoning errors are mostly random noise, crowds are smarter than their individual members. By training on large data sets, large language models can tap into the wisdom of the crowd. The ceiling of a large language model's ability is therefore the wisdom of the crowd, not the wisdom of individual members of the crowd.
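A quick simulation illustrates the claim. This is a hedged sketch with made-up noise parameters, not Galton's actual data: if every guesser is unbiased but noisy, the crowd's median lands much closer to the truth than a typical individual guess.

```python
import random
import statistics

random.seed(0)  # reproducible run

TRUE_WEIGHT = 1198  # actual weight of the ox, in pounds

# 787 unbiased guessers, each off by random noise (std. dev. chosen arbitrarily)
guesses = [TRUE_WEIGHT + random.gauss(0, 100) for _ in range(787)]

crowd_error = abs(statistics.median(guesses) - TRUE_WEIGHT)
typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

# The crowd's median error is a small fraction of the typical individual error.
print(round(crowd_error, 1), round(typical_individual_error, 1))
```

Because the guessers share no systematic bias, the noise cancels when pooled; with biased guessers, no amount of pooling would remove the shared error.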
The fact that each word of a text is maximally unsurprising given the preceding words does not imply that the text as a whole is maximally unsurprising. For any text, you can calculate, for every word, the likelihood (Ltext) that it follows the preceding words. You can also calculate the likelihood (Lideal) of the most likely word that could follow the preceding text.
Lideal − Ltext is noise. For a given text, you can average this noise over all its words. A well-trained large language model can produce texts with far less noise than the average text in its training corpus.
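To make the definition concrete, here is a minimal sketch, with made-up probability tables standing in for a real model, of computing the per-word noise Lideal − Ltext and averaging it over a text.

```python
def average_noise(distributions, actual_words):
    """distributions[i] maps each candidate next word to its likelihood
    given the preceding words; actual_words[i] is the word the text used."""
    noise_per_word = []
    for dist, word in zip(distributions, actual_words):
        l_ideal = max(dist.values())   # likelihood of the most likely next word
        l_text = dist.get(word, 0.0)   # likelihood of the word actually used
        noise_per_word.append(l_ideal - l_text)
    return sum(noise_per_word) / len(noise_per_word)

# Toy example: two positions over a three-word vocabulary (numbers invented).
dists = [
    {"the": 0.6, "an": 0.3, "ox": 0.1},
    {"the": 0.2, "an": 0.5, "ox": 0.3},
]
text = ["the", "ox"]  # first word is the model's top pick, second is not
print(average_noise(dists, text))  # roughly 0.1: (0.0 + 0.2) / 2
```

A text written entirely from the most likely continuations would have zero noise by this measure, yet each individual word being unsurprising says nothing about whether the text as a whole is.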
For further reading, Kahneman (with Sibony and Sunstein) wrote Noise: A Flaw in Human Judgment, which goes into more detail on how a machine-learning model can eliminate noise and thus make better decisions than the average of its training data.
Arbitrarily good prediction of human-generated text can demand arbitrarily high superhuman intelligence.
Simple demonstration #1: Somewhere on the net, probably even in the GPT training sets, is a list of <hash, plaintext> pairs, in that order.
Simple demonstration #2: Train on only science papers up until 2010, each preceded by date and title, and then ask the model to generate starting from titles and dates in 2020.
My reply to a similar statement Eliezer made on Twitter today:
The 2020 extrapolation example gets at a more realistic class of capability that even GPT-3 has to a nonzero extent, and which will scale more continuously in the current regime with practical implications.
It’s not clear that it’s possible for a transformer model to do #2 no matter how much training went into it.
It’d take less computing power than #1.
Scientific papers describe facts about the real world that aren’t fully determined by previous scientific papers.
Take, for example, a paper describing a new species of bacteria that was unknown a decade earlier. Nothing in the training data describes it, and you cannot determine the properties of the species from first principles either.
On the other hand, it might be possible to figure out an algorithm that does create texts that fit to given hash values.
If you are intelligent enough, you can deduce the laws of the universe from a surprisingly small amount of data. In the vein of your example, there is the story of Darwin deducing the existence of a moth with a long proboscis after seeing an orchid with a particular shape, and proving to be right. Perhaps papers from pre-2010 don’t have the right models, but maybe they have enough information and data for a sufficiently intelligent being to piece together from them whatever is missing?
You can piece together some things, but there’s a lot of randomness in our world. A lot of important science is about discovering black swans.
Some things is enough, you’d still get less loss if you’re just right about the stuff that can be pieced together.
Sure. What isn’t clear is that you get a real paper from 2020, rather than a piece of fiction that could have been written in 2010 (or just a typo-filled science paper).
Arbitrarily superintelligent non-causally-trained models will probably still fail at this. IID breaks that kind of prediction. You’d need to train them in a way that makes causally invalid models implausible hypotheses.
But, also, if you did that, then yes, agreed.
These demonstrations seem like grossly over-simplified conjectures. Is this just a thought experiment or actual research interests in the field?
They’re folk theorems, not conjectures. The demonstration is that, in principle, you can go on reducing the losses at prediction of human-generated text by spending more and more and more intelligence, far far past the level of human intelligence or even what we think could be computed by using all the negentropy in the reachable universe. There’s no realistic limit on required intelligence inherent in the training problem; any limits on the intelligence of the system come from the limitations of the trainer, not the loss being minimized as far as theoretically possible by a moderate level of intelligence. If this isn’t mathematically self-evident then you have not yet understood what’s being stated.
No, I didn’t understand what you said. It seemed like you reduced ML systems to a lookup table in #1. In #2, it seems like you know exactly what is used to train these systems, and that papers from before or after 2010 are somehow meaningful indicators for ML systems, which I don’t know where the reasoning came from. My apologies for not being knowledgeable in this area.
The two examples were (mostly) unrelated and served to demonstrate two cases where a perfect text predictor needs to do incredibly complex calculation to correctly predict text. Thus a perfect text predictor is a vast superintelligence (and while we won’t achieve perfect text prediction, as we get better and better we might get closer to superintelligence).
In the first case, if the training data contains series of [hash] then [plain text], then a correct predictor must be able to retrieve the plain text from the hash (and because there are multiple plain texts with the same hash, it would have to calculate through all of them and evaluate which is most probable to appear). Thus correctly predicting text can mean being able to calculate an incredibly large amount of hashes on all combinations of text of certain lengths and evaluating which is the most probable.
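A toy version of the first case, shrunk to a tiny search space so it actually runs: the only way to correctly "predict" the plaintext that follows a hash is to search candidate texts and check which ones hash to the target.

```python
import hashlib
import string
from itertools import product

def predict_text_after_hash(target_hash, alphabet=string.ascii_lowercase, max_len=3):
    """Brute-force the plaintext matching target_hash. A real predictor
    would also have to weigh which of several matching plaintexts is most
    probable; here we just return the first match found."""
    for length in range(1, max_len + 1):
        for letters in product(alphabet, repeat=length):
            candidate = "".join(letters)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None  # no candidate in the search space matched

target = hashlib.sha256(b"ox").hexdigest()
print(predict_text_after_hash(target))  # prints: ox
```

Even this tiny example hashes hundreds of candidates; for realistic plaintext lengths the search space grows exponentially, which is exactly why minimizing prediction loss on such data has no modest intelligence ceiling.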
In the second case, the task is to predict future papers based on past papers, which is kinda obviously very hard.
It doesn’t seem clear to me what those two demonstrations are trying to test. 1 seems like a case of over-fitting. 2 seems like an extension of 1 except it’s the case with papers, not sure how the papers case has anything to do with the generalized capabilities of ChatGPT. If you think ChatGPT is merely a complex lookup-table, then I don’t really know what to say. Lookup-table or NLP, I don’t know how either has much to do with general intelligence. Both are models that may seem intelligent if that’s where the discussion is focusing on. Honestly, I don’t really understand a lot of the stuff discussed on this site.
This is a really good point, but it only shows that superhuman reasoning might be possible, not that it is. Like, it’s possible to the extent that the transition functions humans can produce are restricted by noise rather than bias. But it’s unclear (at least to me) why bias can’t be most of the story.
My experience over the past few years has been one of being surprised by latent capacities in existing models. A lot of stuff like prompt engineering, fine tuning, chain of thought, Open-AI-style “alignment” can be seen as not so much creating new capacities as revealing/refining latent ones. Back when GPT-3 was new, Connor Leahy said something like “GPT-3 is already general intelligence” which sounded like hyperbole to me at the time, and seems less so now.
Though RSI still seems very plausible to me, one scenario I’ve started thinking about is a massive effective capabilities gain caused not by RSI or any non-trivial algorithmic improvement, but just the dissolution of a much larger than anticipated “latent capacities overhang”.
Possibly an absurd and confused scenario, but is it that implausible that some day we will get a model that still seems kinda dumb but is in fact one prompt away from super-criticality?
You don’t need to change anything in the underlying machine learning algorithms to make a model like ChatGPT generate new training data that could be used for recursive self-improvement.
Especially, if you give it access to a console so that it can reliably run code, it could create its own training data and get into recursive self-improvement.
If, for example, you want it to learn to reliably multiply two 4-digit numbers, you can randomly generate 4-digit numbers. Then you let it generate a text answer with individual steps. You let a second model create Python code to validate all the calculations in the individual steps. If the Python code validates that all the calculations are correct, you have a new piece of training data on how to multiply two 4-digit numbers.
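A sketch of that loop, with a hypothetical `model_worked_answer` standing in for the language model and a direct arithmetic check standing in for the second, code-writing model:

```python
import random

def model_worked_answer(a, b):
    """Hypothetical stand-in for the language model's step-by-step answer.
    A real system would sample this from the model (and it would sometimes
    be wrong); here it is always correct so the sketch runs."""
    return {"steps": f"{a} x {b} = {a * b}", "result": a * b}

def generate_verified_examples(n):
    """Sample random 4-digit multiplication problems, keep only the answers
    that pass validation, and return them as new training data."""
    dataset = []
    while len(dataset) < n:
        a, b = random.randint(1000, 9999), random.randint(1000, 9999)
        answer = model_worked_answer(a, b)
        # Stand-in for the second model's generated validation code:
        if answer["result"] == a * b:
            dataset.append((f"Multiply {a} by {b}, showing your steps.", answer))
    return dataset

examples = generate_verified_examples(5)
print(len(examples))  # prints: 5
```

The key property is that the filter is mechanical: only examples that pass validation enter the new training set, so the model can be fine-tuned on its own verified outputs.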
Based on ChatGPT user data, it might be possible to build an automated system that finds problems where ChatGPT currently gives a wrong answer most of the time, and then to write code that analyses newly created examples to see whether they are correct.
I’ll just note here that “ability to automate the validation” is only possible when we already know the answer. Since the automated loom, computers have been a device for doing the same thing, over and over, very fast.
You don’t necessarily need to know the correct answer beforehand to be able to validate whether or not an answer is correct. If we take Eliezer’s problem of generating text that matches a given hash value, it’s easy to validate whether an answer is true or not even if you don’t know the answer beforehand.
What’s important is that the AI is sometimes able to generate correct answers. If the criteria for a correct answer are well-defined enough it can go from solving a problem 1% of the time correctly to solving it 100% of the time correctly.
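A sketch of that amplification, assuming only a well-defined check (here: recognizing a square root without knowing it in advance) and a generator that is right about 1% of the time:

```python
import random

random.seed(1)  # reproducible run

def unreliable_solver():
    """Hypothetical generator: guesses an integer, right roughly 1% of the time."""
    return random.randint(0, 100)

def is_correct(candidate):
    """Well-defined check that recognizes the answer without storing it:
    is this candidate the square root of 1764?"""
    return candidate * candidate == 1764

def amplify(solver, check, max_tries=100_000):
    """Filter the generator through the check: a ~1% solver becomes a
    near-100% solver, at the cost of more sampling."""
    for _ in range(max_tries):
        candidate = solver()
        if check(candidate):
            return candidate
    return None

print(amplify(unreliable_solver, is_correct))  # prints: 42
```

With a 1% success rate per sample, the chance of failing all 100,000 tries is astronomically small; the bottleneck is having a check that is cheap and well-defined, not the generator's reliability.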
ChatGPT is used by millions of people, and a good portion of them will click the feedback button, especially if the UI is optimized for that. It’s possible to build automated processes that look at the problems where it currently frequently makes mistakes and learn to avoid them. It is possible to build a self-improving system around that.
If you let it do that for 10,000 different problems I would expect that it learns some reasoning habits that generalize and are useful for solving other problems as well.
One of the top bloggers in my bubble said that he tries to write the most surprising next sentence when he is working on his posts.
You’ve failed to convince me that “Lideal − Ltext is noise”, and you haven’t really offered any arguments for it. Could you elaborate, please? This seems potentially very interesting and relevant.
I agree that this is a key question. It seems to me that sometimes it might be noise and sometimes it might be creativity.
Signal and noise depend a bit on the perspective. I apply the conception from Kahneman, and if you are interested in more, reading Kahneman’s latest book or listening to a podcast of him speaking about the book is worthwhile.
Kahneman goes through the example of an insurance company that treated the variation between underwriters’ quotes for insurance policies as noise and then used machine learning to cut down on that noise.
It’s worth noting that low noise is not universally desirable. I remember some VC firm saying they had a policy that if all partners thought the firm should invest in a company, it would not invest. That’s because the wisdom of the crowd does not make good VC investment decisions.
True.
-- Simulators
[EDIT: I read this as “having no knowledge of oxen” instead of “having knowledge of oxen”—is this what you meant? The comment seems relevant nevertheless.]
This does not follow: it is entirely possible that the fair-goers had no specific domain knowledge of oxen while still having biases arising from domain-general reasoning. And indeed, they probably knew something about oxen. From Jaynes’ Probability Theory:
The main point is correct, but maybe you should mention that your demonstration is an especially easy case, and not necessarily the main reason (unless that’s what you think?). Also:
If you meant the crowd had no bias on average, that’s indeed the idea. But one can read your sentence as meaning that each individual had no bias, which would break the whole wisdom of crowd idea (because then Galton wouldn’t need a crowd: he could simply repeat the measurement process in one individual).
I’m so happy that I didn’t go to sleep, because I got to read this masterpiece of an article as soon as it was published.