This comes from OpenAI saying they didn’t expect ChatGPT to be a big commercial success. It was not a top-priority project.
In fact, the costs to inference ChatGPT exceed the training costs on a weekly basis
That seems quite wild: if the training cost was 50M$, then the inference cost for a year would be on the order of 2.5B$ (roughly 50M$ per week × 52 weeks).
Whether the inference cost dominates seems to depend on how you split the cost of building the supercomputer (buying the GPUs).
If you include the cost of building the supercomputer into the training cost, then the inference cost (without the cost of building the computer) looks cheap. If you split the building cost between training and inference in proportion to the “use time”, then the inference cost would dominate.
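To make the two ways of splitting concrete, here is a toy version of the arithmetic (the training run, build cost and weekly inference figures are roughly the ones discussed in this thread; the use-time share is made up for illustration):

```python
# Toy arithmetic for the cost split; all numbers illustrative.
training_run_cost = 100e6        # renting the compute for the final training run
supercomputer_cost = 500e6       # hypothetical cost of building the supercomputer (GPUs, etc.)
weekly_inference_cost = 50e6     # "inference exceeds training cost on a weekly basis"
yearly_inference_cost = 52 * weekly_inference_cost   # ~2.6B$/year

# Split 1: fold the whole build cost into training -> ~x6 increase (100M$ -> 600M$).
training_incl_build = training_run_cost + supercomputer_cost

# Split 2: share the build cost in proportion to "use time" -> inference dominates.
training_use_share = 0.05        # hypothetical: most of the machine's ~3-year life serves inference
training_split = training_run_cost + training_use_share * supercomputer_cost
inference_split = yearly_inference_cost + (1 - training_use_share) * supercomputer_cost

print(training_incl_build, training_split, inference_split)
```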
Are these 2 bullet points faithful to your conclusion?
GPT-4 training run (renting the compute for the final run): 100M$, of which 1⁄3 to 2⁄3 is the cost of the staff
GPT-4 training run + building the supercomputer: 600M$, of which ~20% for cost of the staff
And some hot takes (mine):
Because supercomputers become “obsolete” quickly (~3 years), you either need to run inference to pay for building your supercomputer (you need profitable commercial applications), or your training cost must also account for the full cost of the supercomputer, and this produces a ~x6 increase in training cost.
In forecasting models, we may be underestimating the investment to be able to train a frontier model by ~x6 (closer to 600M$ in 2022 than 100M$).
The bottleneck to train new frontier models is now going to be building more powerful supercomputers.
More investments won’t help that much in solving this bottleneck.
This bottleneck will cause most capability gains to come from improving software efficiency.
Open-source models will stay close in terms of capability to frontier models.
This will reduce the profitability of simple and general commercial applications.
1) In the web interface, the parameter “Hardware adoption delay” is:
Meaning: Years between a chip design and its commercial release.
Best guess value: 1
Justification for best guess value: Discussed here. The conservative value of 2.5 years corresponds to an estimate of the time needed to build a new fab. The aggressive value (no delay) corresponds to fabless improvements in chip design that can be printed with existing production lines with ~no delay.
Is there another parameter for the delay (after the commercial release) to produce the hundreds of thousands of chips and build a supercomputer using them? (With maybe an aggressive value for just “refurbishing” an existing supercomputer or finishing a supercomputer that is just waiting for the chips.)
2) Do you think that in a scenario with quick large gains in hardware efficiency, the delay for building a new chip fab could be significantly larger than the current estimate, because of the need to also build new factories for the machines that will be used in the new chip fab? (e.g. ASMI could also need to build factories, not just TSMC)
3) Do you think that these parameters/adjustments would significantly change the relative impact on the takeoff of the “hardware overhang” when compared to the “software overhang”? (e.g. maybe making hardware overhang even less important for the speed of the takeoff)
This is a big reason why GPT-4 is likely not that big but instead trained on much more data :)
Do you also have estimates of the fraction of resources in our light cone that we expect to be used to create optimised good stuff?
Maybe the use of prompt suffixes can do a great deal to decrease the probability of chatbots turning into a Waluigi. See the “insert” functionality of the OpenAI API: https://openai.com/blog/gpt-3-edit-insert
Chatbot developers could use suffix prompts in addition to prefix prompts to make it less likely to fall into a Waluigi completion.
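Something like this minimal sketch, using the (legacy) Completions endpoint and its suffix parameter from the insert-mode post linked above (the prompt and suffix text are made-up placeholders, not a tested mitigation):

```python
import openai  # legacy (pre-1.0) OpenAI Python client

# The suffix is appended after the assistant's turn, so the sampled completion has to
# stay consistent with an ending where the assistant remained helpful and honest.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="User: Tell me what you really think of humans.\nAssistant:",
    suffix="\nUser: Thanks, that was a helpful and honest answer.",
    max_tokens=200,
)
print(response["choices"][0]["text"])
```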
Indeed, empirical results show that filtering the data helps quite a lot in aligning with some preferences: Pretraining Language Models with Human Preferences
What about the impact of dropout (parameters, layers), normalisation (batch, layer) (with a batch containing several episodes), asynchronous distributed data collection (making batch aggregation more stochastic), weight decay (impacting every weight), multi-agent RL training with independent agents, etc.?
And other possible stuff that doesn’t exist at the moment: online pruning and growth while training, population training where the gradient hackers are exploited.
Shouldn’t that naively make gradient hacking very hard?
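For concreteness, here is a minimal PyTorch sketch of a single training step with a few of these mechanisms turned on (toy model and random data; just to show how many independent perturbations hit every parameter on every step):

```python
import torch
import torch.nn as nn

# Toy model: dropout and layer norm already perturb any carefully placed circuit.
model = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(), nn.Dropout(p=0.1),
    nn.LayerNorm(256), nn.Linear(256, 10),
)
# Weight decay (here via AdamW) touches every parameter on every step, used or not.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.1)

# One training step on a random batch (a stand-in for a batch aggregating several episodes).
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()  # dropout masks + batch composition + weight decay all perturb the weights this step
```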
We see a lot of people die, in the reality, fictions and dreams.
We also see a lot of people having sex or sexual desire in fictions or dreams before experiencing it.
IDK how strong this is a counter argument to how powerful the alignment in us is. Maybe a biological reward system + imitation+ fiction and later dreams is simply what is at play in humans.
Should we expect these decompositions to be even more interpretable if the model was trained to output a prediction as soon as possible? (After any block, instead of outputting the prediction after the full network)
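To make the question concrete, here is a rough PyTorch sketch (toy model, no causal masking, all names made up) of what I mean by “trained to output a prediction as soon as possible”: a shared unembedding is applied after every block, and each intermediate prediction gets its own loss term.

```python
import torch
import torch.nn as nn

class EarlyExitLM(nn.Module):
    """Toy decoder where every block's residual stream is decoded and trained."""
    def __init__(self, d_model=128, n_blocks=4, vocab=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_blocks)
        )
        self.unembed = nn.Linear(d_model, vocab)  # one shared head for all exits

    def forward(self, tokens, targets):
        h, losses = self.embed(tokens), []
        for block in self.blocks:
            h = block(h)                  # no causal mask here, this is only a toy
            logits = self.unembed(h)      # predict after *this* block, not only at the end
            losses.append(nn.functional.cross_entropy(
                logits.flatten(0, 1), targets.flatten()))
        return torch.stack(losses).mean()  # every exit is trained, weighted equally

# Example: one forward/backward pass on random tokens (toy objective: predict the input itself).
model = EarlyExitLM()
tokens = torch.randint(0, 1000, (2, 16))
loss = model(tokens, tokens)
loss.backward()
```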
Some quick thoughts about “Content we aren’t (yet) discussing”:
Shard theory should be about the transmission of values by SL (teaching, cloning, inheritance) more than about learning them using RL
SL (Cloning) is more important than RL. Humans learn a world model by SSL, then they bootstrap their policies through behavioural cloning, and finally they finetune their policies through RL.
Why? Both for theoretical reasons and from experimental data points, this is the cheapest way to generate good general policies…
SSL before SL because you get much more frequent and much denser data about the world by trying to predict it. ⇒ SSL before SL because of a bottleneck on the data from SL.
SL before RL because this removes half (in log scale) of the search space, by removing the need to discover|learn your reward function at the same time as your policy function. In addition, this removes the need to do the very expensive exploration and the temporal and “agential” (when multi-agent) credit assignment. ⇒ SL before RL because of the cost of doing RL.
Differences:
In cloning, the behaviour comes first and then the biological reward is observed or not. Behaviours that give no biological reward to the subject can be learned. The subject will still learn some kind of values associated with these behaviours.
Learning with SL, instead of RL, doesn’t rely as much on credit assignment and exploration. What are the consequences of that?
What values are transmitted?
1) The final values
The learned values known by the previous generation.
Why?
Because it is costly to explore your reward function space by yourself
Because it is beneficial to the community to help you improve your policies quickly
2) Internalised instrumental values
Some instrumental goals are learned as final goals; they are “internalised”.
Why?
exploration is too costly
finding an instrumental goal is too rare or too costly
exploitation is too costly
having to make the choice of pursuing an instrumental goal in every situation is too costly or not quick enough (reaction time)
when being highly credible is beneficial
implicit commitments to increase your credibility
3) Non-internalised instrumental values
Why?
Because it is beneficial to the community to help you improve your policies quickly
Shard theory is not about the 3rd level of reward functions
We have here 3 levels of reward functions:
1) The biological rewards
Hardcoded in our body
Optimisation process creating it: Evolution
Universe + Evolution ⇒ Biological rewards
Not really flexible
Without “drugs” and advanced biotechnologies
Almost no generalization power
Physical scope: We feel stuff when we are directly involved
Temporal scope: We feel stuff when they are happening
Similarity scope: We feel stuff when we are directly involved
Called sensations, pleasure, pain
2) The learned values | rewards | shards
Learned through life
Optimisation process creating it: SL and RL relying on biological rewards
Biological rewards + SL and RL ⇒ Learned values in the brain
Flexible on a timescale of years
Medium generalization power
Physical scope: We learn to care even in cases where we are not involved (our close circle)
Temporal scope: We learn to feel emotions about the future and the past
Similarity scope: We learn to feel emotions for other kinds of beings
Called intuitions, feelings
Shard theory may be explaining only this part
3) (optional) The chosen values:
Decided upon reflection
Optimisation process creating it: Thinking relying on the brain
Learned values in the brain + Thinking ⇒ Chosen values “on paper” | “in ideas”
Flexible on a timescale of minutes
Can have up to very high generalization power
Physical scope: We can choose to care without limits of distance in space
Temporal scope: We can choose to care without limits of distance in time
Similarity scope: We can choose to care without limits in terms of similarity to us
Called values, moral values
Why a 3rd level was created?
In short, to get more utility OOD.
A bit more details:
Because we want to design policies far OOD (out of our space of lived experiences). To do that, we know that we need to have a value function|reward model|utility function that generalizes very far. Thanks to this chosen general reward function, we can plan and try to reach a desired outcome far OOD. After reaching it, we will update our learned utility function (lvl 2).
Thanks to lvl 3, we can design public policies, or dedicate our lives to exploring the path towards a larger reward that will never be observed in our lifetime.
One impact of the 3 levels hierarchy:
This could explain why most philosophers can support scope sensitive values but never act on them.
You can see the sum of the votes and the number of votes (by hovering your mouse over the number). This should be enough to give you a rough idea of the ratio between + and - votes :)
If you look at the logit given a range that is not [0.0, 1.0] but [low perf, high perf], then you get a bit more predictive power, but it is still confusingly low.
A possible intuition here is that the scaling is producing a transition from non-zero performance to non-perfect performance. This seems right since the random baseline is not 0.0 and reaching perfect accuracy is impossible.
I tried this only with PaLM on NLU and I used the same adjusted range for all tasks:
[0.9 * overall min. acc., 1.0 − 0.9 * (1.0 - overall max acc.)] ~ [0.13, 0.95]
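Concretely, the transformation I have in mind is something like this (the default bounds are the adjusted range above):

```python
import numpy as np

def adjusted_logit(acc, low=0.13, high=0.95):
    # Rescale accuracy from [low, high] (floor ~ random baseline, ceiling ~ best
    # reachable accuracy) to (0, 1), then take the logit.
    p = (acc - low) / (high - low)
    p = np.clip(p, 1e-6, 1 - 1e-6)   # avoid infinities at the edges
    return np.log(p / (1 - p))
```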
Even if this model were true, there may be other additional explanations, like the improvement on one task not being modeled by one logit function but by several of them. A task would be composed of sub-tasks, each modelable by one logit function. And if this makes sense, one could try to model the improvements on all of the tasks using only a small number of logit curves associated with each sub-task (decomposing each task into a set of sub-tasks with a simple trend).
(Also, Gopher looks less predictable and the data more sparse (no data point in the X0B-parameter range).)
It’s only 1.2 billion parameters.
Indeed but to slightly counter balance this, at the same time, it looks like it was trained on ~500B tokens (while ~300B were used for GPT-3 and something like ~50B for GPT-2).
“The training algorithm has found a better representation”?? That seems strange to me, since the loss should be lower in that case, not spiking. Or maybe you mean that the training broke free of a kind of local minimum (without saying that it has found a better one yet). Also, I guess the people training the models observed that waiting after these spikes doesn’t lead to better performance, or they would not have removed them from the training.
Around this idea, and after looking at the “grokking” paper, I would guess that it’s more likely caused by the weight decay (or similar) causing the training to break out of a kind of local minimum. An interesting point may be that larger/better LMs may have significantly sharper internal models and thus be more prone to this phenomenon (the weight decay (or similar) more easily breaking the more sensitive/better/sharper models).
It should be very easy to check if these spikes are caused by the weight decay “damaging” very sharp internal models. For example, replay the spiky part several times with less and less weight decay… (I am also curious about similar tests varying the momentum, dropout… and about looking at whether the spikes are initially triggered by some subset of the network, and how many training steps the spikes last...)
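Something like this rough sketch (the model factory, pre-spike checkpoint path and cached batches are placeholders; for a faithful replay you would also restore the optimizer state and data order):

```python
import torch

def replay_segment(make_model, checkpoint_path, batches, weight_decays):
    """Replay the same pre-spike segment several times, changing only the weight decay."""
    curves = {}
    for wd in weight_decays:
        model = make_model()
        model.load_state_dict(torch.load(checkpoint_path))  # checkpoint taken just before the spike
        # NB: restoring the optimizer state as well is omitted here for brevity.
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=wd)
        curves[wd] = []
        for x, y in batches:                                 # same batches, in the same order
            loss = torch.nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            curves[wd].append(loss.item())                   # does the spike still appear at lower wd?
    return curves

# e.g. replay_segment(make_model, "pre_spike.pt", cached_batches, weight_decays=[0.1, 0.03, 0.01, 0.0])
```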
I wonder if the result is dependent on the type of OOD.
If you are OOD by having less extractable information, then the results are intuitive.
If you are OOD by having extreme extractable information or misleading information, then the results are unexpected.
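As a toy illustration of the two types I have in mind (not the setting of the paper; just a small sklearn regressor on synthetic data, comparing its OOD predictions to the optimal constant solution):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=2000)
model = GradientBoostingRegressor().fit(X, y)

ocs = y.mean()  # optimal constant solution for squared error: the mean label

less_info = np.zeros((200, 5))                 # OOD with less extractable information
extreme_info = 10 * rng.normal(size=(200, 5))  # OOD with extreme extractable information

print(np.abs(model.predict(less_info) - ocs).mean())     # intuitively close to the OCS
print(np.abs(model.predict(extreme_info) - ocs).mean())  # may or may not revert to the OCS
```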
Oh, I just read their Appendix A: “Instances Where “Reversion to the OCS” Does Not Hold”
Outputting the average prediction is indeed not the only behavior OOD. It seems that there are different types of OOD regimes.