>They find functions that fit the results. Most such functions are simple and therefore generalize well. But that doesn’t mean they generalize arbitrarily well.
You have no idea how simple the functions they are learning are.
>Not really any different from the human language LLM, it’s just trained on stuff evolution has figured out rather than stuff humans have figured out. This wouldn’t work if you used random protein sequences instead of evolved ones.
It would work just fine. The model would predict random arbitrary sequences and the structure would still be there.
>They try to predict the results. This leads to predicting the computation that led to the results, because the computation is well-approximated by a simple function and they are also likely to pick a simple function.
Models don’t care about “simple”. They care about what works. Simple is arbitrary and has no real meaning. There are many examples of interpretability research revealing convoluted functions.
https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking
>Inverting relationships like this is a pretty good use-case for language models. But here you’re still relying on having an evolutionary ecology to give you lots of examples of proteins.
So? The point is that they’re limited by the data and the causal processes that informed it, not by the intelligence or knowledge of the humans providing the data. Models like this can and often do eclipse human ability.
If you train a predictor on text describing the outcomes of games as well as the games themselves, then a good enough predictor should be able to eclipse even the best match in its training data by modulating the text describing the outcome.
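To make “modulating the text describing the outcome” concrete, here is a deliberately tiny counting-model toy (my own illustration, not any real training setup): players of mixed skill generate games, each game is logged with an outcome label, and the “predictor” just estimates move quality conditional on that label. Conditioning generation on the best label yields play better than the dataset average and better than even the best player’s average game.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy "game": 5 independent decisions, each either correct (1 point) or not.
# Simulated players of varying skill produce the training data, and every game
# is logged together with a text-like outcome label (its final score).
STEPS, N_GAMES = 5, 20_000
SKILLS = [0.3, 0.5, 0.7]   # per-move probability of playing the correct move

dataset = []
for _ in range(N_GAMES):
    skill = random.choice(SKILLS)
    moves = [1 if random.random() < skill else 0 for _ in range(STEPS)]
    dataset.append((sum(moves), moves))            # (outcome label, game record)

# "Training" the predictor: estimate P(correct move | outcome label) by counting.
counts = defaultdict(lambda: [0, 0])               # label -> [correct, total]
for label, moves in dataset:
    counts[label][0] += sum(moves)
    counts[label][1] += len(moves)

# "Inference": condition generation on the best outcome label seen in training.
best_label = max(counts)
p_correct = counts[best_label][0] / counts[best_label][1]

print("average training game:     ", sum(lab for lab, _ in dataset) / N_GAMES)
print("best player's average game:", 0.7 * STEPS)
print(f"conditioned on label {best_label}:     ", p_correct * STEPS)
```

A counting model can only recover behaviour already present in the data, so this toy only shows outcome-conditioning beating imitation of even the best player’s average play; exceeding the single best game in training additionally requires the predictor to generalize, which is the part I’m claiming a good enough predictor does.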
>It would work just fine. The model would predict random arbitrary sequences and the structure would still be there.
I don’t understand your position. Are you saying that if we generated protein sequences by uniformly randomly independently picking letters from “ILVFMCAGPTSYWQNHEDKR” to sample strings, and then trained an LLM to predict those uniform random strings, it would end up with internal structure representing how biology works? Because that’s obviously wrong to me and I don’t see why you’d believe it.
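Concretely, the data-generating process I have in mind is nothing more than this (a toy sketch, not any real protein model’s training data):

```python
import random

# Uniform, independent draws from the 20-letter amino-acid alphabet:
# strings that look superficially protein-like but carry no evolutionary
# or biophysical signal for a model to pick up on.
ALPHABET = "ILVFMCAGPTSYWQNHEDKR"

def random_protein(length: int) -> str:
    return "".join(random.choice(ALPHABET) for _ in range(length))

print(random_protein(120))
```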
>Models don’t care about “simple”. They care about what works. Simple is arbitrary and has no real meaning. There are many examples of interpretability research revealing convoluted functions.
The algorithm that uses a Fourier transform for modular multiplication is really simple. It is probably the most straightforward way to solve the problem with the tools available to the network, and it is strongly related to our best known algorithms for multiplication.
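To spell out why I call it simple: the network is essentially doing modular arithmetic with rotations on the unit circle. Here is a minimal numpy sketch of that idea, shown for a modular addition task like the one in the linked grokking analysis (this is the trick, not the network’s actual weights):

```python
import numpy as np

p = 97   # a prime modulus (toy choice)
k = 5    # one Fourier frequency; the trained network spreads this over several
a, b = 40, 73

# Represent each residue as a rotation on the unit circle at frequency k.
za = np.exp(2j * np.pi * k * a / p)
zb = np.exp(2j * np.pi * k * b / p)

# Multiplying the two rotations adds their angles, i.e. adds a and b mod p.
zab = za * zb

# Read off the answer: the residue whose rotation lines up with the product.
candidates = np.exp(2j * np.pi * k * np.arange(p) / p)
answer = int(np.argmax((candidates.conj() * zab).real))

print(answer, (a + b) % p)  # both print 16
```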
>So? The point is that they’re limited by the data and the causal processes that informed it, not by the intelligence or knowledge of the humans providing the data. Models like this can and often do eclipse human ability.
My claim is that for our richest data, the causal process that informs the data is human intelligence. Of course you are right that there are other datasets available, but they are less rich, though sometimes useful (as in the case of proteins).
Furthermore, what I’m saying is that if the AI learns to create its own information instead of relying on copying data, it could achieve much more.
>If you train a predictor on text describing the outcomes of games as well as the games themselves, then a good enough predictor should be able to eclipse even the best match in its training data by modulating the text describing the outcome.
Plausibly true, but don’t our best game-playing AIs also do self-play to create new game-playing information instead of purely relying on others’ games? Like AlphaStar.
>I don’t understand your position. Are you saying that if we generated protein sequences by uniformly randomly independently picking letters from “ILVFMCAGPTSYWQNHEDKR” to sample strings, and then trained an LLM to predict those uniform random strings, it would end up with internal structure representing how biology works? Because that’s obviously wrong to me and I don’t see why you’d believe it.
Ah no. I misunderstood you here. You’re right.
What I was trying to get at is that nothing in particular (humans, evolution, etc.) has to have “figured something out”. The only requirement is that the Universe has “figured it out”, i.e. that it is possible. More on this further down.
>The algorithm that uses a Fourier transform for modular multiplication is really simple. It is probably the most straightforward way to solve the problem with the tools available to the network, and it is strongly related to our best known algorithms for multiplication.
Simple is relative. It’s a good solution; I never said it wasn’t. It’s not, however, the simplest solution. The point here is that models don’t optimize for simple. They don’t care about that. They optimize for what works. If a simple function works, then great. If it stops working, then the model shifts away from it just as readily as it picked a “simple” one in the first place. There is also no rule that a simple function would be more or less representative of the causal processes informing the real outcome than a more complicated one.
If a perfect predictor straight out of training is using a simple function for any task, it is because that function worked for all the data it had ever seen, not because it was simple.
>My claim is that for our richest data, the causal process that informs the data is human intelligence.
1. You underestimate just how much of the internet contains text whose prediction outright requires superhuman capabilities: figuring out hashes, predicting the results of scientific experiments, generating the result of many iterations of refinement, or predicting stock prices/movements. The Universe has figured it out, and that is enough. A perfect predictor of the internet would be a superintelligence; it won’t ‘max out’ anywhere near human level just because humans wrote it down.
Essentially, for the predictor, the upper bound of what can be learned from a dataset is not the most capable trajectory in it, but the conditional structure of the universe implicated by the sum of those trajectories.
A predictor modelling all human text isn’t modelling humans but the universe. Text is written by the universe and humans are just the bits that touch the keyboard.
A single human is a general intelligence. Humanity collectively is a superintelligence.
A single machine that is at the level of the greatest human intelligence in every domain is a superintelligence, even if there’s no notable synergy or interpolation between domains.
But the chances of synergy or interpolation between domains are extremely high, much the same way you might expect new insights to arise from a human who has read and mastered the total sum of human knowledge.
A relatively mundane example of this that has already happened is the fact that you can converse in Catalan with GPT-4 about topics no human Catalan speaker knows.
Game datasets aren’t the only example of outcomes being preceded by text describing them. Quite a lot of text is fashioned this way, actually.
>Plausibly true, but don’t our best game-playing AIs also do self-play to create new game-playing information instead of purely relying on others’ games? Like AlphaStar.
None of our best game-playing AIs are predictors. They are either deep reinforcement learning (which, like anything else, can be modelled by a predictor: https://arxiv.org/abs/2106.01345, https://arxiv.org/abs/2205.14953, https://sites.google.com/view/multi-game-transformers), some variation of search, or a combination of both. The only information that can be provided to them is the rules and the state of the board/game, so they are bound by the data in a way a predictor isn’t necessarily.
Self-play is a fine option. What I’m disputing is the necessity of it when prediction is on the table.
I don’t believe your idea that neural networks are gonna gain superhuman ability through inverting hashes, because I don’t believe neural networks can learn to invert cryptographic hashes. They are specifically designed to be non-invertible.
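For concreteness, the forward direction is trivial and the lack of a reverse direction is the entire design goal of the primitive (a minimal sketch using Python’s standard hashlib):

```python
import hashlib

# Forward: computing the digest of any string is instant.
preimage = b"the model would have to recover this exact string"
digest = hashlib.sha256(preimage).hexdigest()
print(digest)

# Reverse: recovering `preimage` from `digest` has no known shortcut; a generic
# preimage attack on SHA-256 means searching on the order of 2**256 candidates.
# Nothing about gradient descent on next-token prediction changes that.
```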