Sorry for the late reply, but yeah, it was mostly vibes based on what I’d seen before. I’ve been looking over the benchmarks in the Technical Report again though, and I’m starting to feel like 500B+10T isn’t too far off. Although the language benchmarks are fairly similar, the improvements in mathematical capabilities over the previous SOTA are much larger than I first realised, and seem to match a model of that size given the performance of the conventionally trained PaLM and its derivatives.