According to the Chinchilla paper, a compute-optimal model of that size should have ~500B parameters and have been trained on ~10T tokens. Based on GPT-4's demonstrated capabilities, though, that's probably an overestimate.
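For concreteness, here's the rough arithmetic behind that figure, as a sketch using the common approximations C ≈ 6·N·D training FLOPs and D_opt ≈ 20·N_opt tokens per parameter (the fitted coefficients in the actual paper differ slightly, so treat the outputs as order-of-magnitude):

```python
def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Given a training-compute budget C, return (params N, tokens D)
    under the approximations C = 6*N*D and D = tokens_per_param * N."""
    # Substituting D = r*N into C = 6*N*D gives N = sqrt(C / (6*r)).
    n = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    return n, tokens_per_param * n

# Compute budget implied by ~500B params trained on ~10T tokens:
c = 6 * 500e9 * 10e12  # ~3e25 FLOPs
n, d = chinchilla_optimal(c)
print(f"params ~ {n / 1e9:.0f}B, tokens ~ {d / 1e12:.0f}T")
# -> params ~ 500B, tokens ~ 10T
```

So 500B+10T is self-consistent as the Chinchilla-optimal point for a ~3e25 FLOP budget; the disagreement below is only about whether GPT-4 actually sits at that point or traded parameters for extra tokens.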
Yeah, agreed. It would make sense for it to be trained on 10x-20x the number of tokens of GPT-3, so around 3-5T tokens (2x-3x Chinchilla), which would give around 200-300B parameters under those laws.
Are you saying that you would have expected GPT-4 to be stronger if it was 500B+10T? Is that based on benchmarks/extrapolations or vibes?
Sorry for the late reply, but yeah, it was mostly vibes based on what I'd seen before. I've been looking over the benchmarks in the Technical Report again, though, and I'm starting to feel like 500B+10T isn't too far off. Although the language benchmarks are fairly similar, the improvements in mathematical capabilities over the previous SOTA are much larger than I first realised, and they seem to match a model of that size, judging by the performance of the conventionally trained PaLM and its derivatives.