There may also be some architecture advances, although I’m unsure why we haven’t seen these in recent LLMs. In Sam Altman’s AC10 meetup Q&A, he did say that GPT-4 would use a different loss function; what effect would that have? I have no idea.
You can see some examples in Lilian Weng’s Jan 2023 overview of Transformer advances, “The Transformer Family v2”.
One possibility is shifting the power law. See UL2, which combines the various denoising losses in what turns out to be a very good way: “U-PaLM: Transcending Scaling Laws with 0.1% Extra Compute”, Tay et al 2022, which halves PaLM’s training requirements with the UL2 losses. I don’t know if OA discovered UL2 first, but it’s not all that exotic or subtle, and it’s certainly something that many people ask themselves when they learn about the difference between bidirectional and unidirectional models: “why not train on both/all the losses?”
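To make the closing question concrete, here is a minimal sketch (in Python) of what a UL2-style mixture-of-denoisers looks like. The helper names, sentinel tokens, span statistics, and mode weights are all my own illustrative assumptions, not the paper’s actual configuration (UL2 mixes several denoiser settings with its own hyperparameters); the structure is the point: each training example is randomly routed to one of several corruption objectives instead of a single fixed loss.

```python
import random

# Hypothetical T5-style sentinel vocabulary; names are illustrative.
SENTINELS = [f"<extra_id_{i}>" for i in range(100)]

def span_corrupt(tokens, mean_span_len, corrupt_rate, rng):
    """Span corruption: drop random spans from the input, replacing each
    with a sentinel; the target lists the dropped spans after their
    sentinels. R-denoising uses short/sparse spans, X-denoising long/dense."""
    inputs, targets = [], []
    i, sid = 0, 0
    while i < len(tokens):
        # Start a span with prob. corrupt_rate/mean_span_len, so that roughly
        # a corrupt_rate fraction of all tokens gets corrupted on average.
        if sid < len(SENTINELS) and rng.random() < corrupt_rate / mean_span_len:
            span = max(1, round(rng.expovariate(1 / mean_span_len)))
            inputs.append(SENTINELS[sid])
            targets.append(SENTINELS[sid])
            targets.extend(tokens[i:i + span])
            i += span
            sid += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

def prefix_lm(tokens, rng):
    """S-denoising: condition on a random prefix, predict the suffix --
    ordinary causal LM training, but with a bidirectionally-visible prefix."""
    split = rng.randrange(1, len(tokens))
    return tokens[:split], tokens[split:]

def ul2_example(tokens, rng=random):
    """Route each example to one denoiser family; the weights are made up."""
    mode = rng.choices(["R", "S", "X"], weights=[0.5, 0.25, 0.25])[0]
    if mode == "R":    # regular: short spans, ~15% corruption
        return span_corrupt(tokens, mean_span_len=3, corrupt_rate=0.15, rng=rng)
    if mode == "X":    # extreme: long spans, ~50% corruption
        return span_corrupt(tokens, mean_span_len=32, corrupt_rate=0.5, rng=rng)
    return prefix_lm(tokens, rng)  # sequential: prefix-LM

inputs, targets = ul2_example("the quick brown fox jumps over the lazy dog".split())
```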
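And to see why “shifting the power law” is worth so much: because scaling exponents are small, even a tiny downward shift in the loss curve’s coefficient cashes out as a large constant factor of compute. A toy calculation (the exponent and coefficients are made-up illustrative values, not fitted to PaLM):

```python
# If loss follows a power law in compute, L(C) = A * C**(-alpha), then a
# better objective that lowers the coefficient to A2 matches the old curve
# run with k times more compute: A * (k*C)**(-alpha) == A2 * C**(-alpha)
# when k = (A / A2)**(1 / alpha).
alpha = 0.05          # illustrative small scaling exponent
A, A2 = 1.0, 0.966    # a ~3.5% drop in the coefficient...
k = (A / A2) ** (1 / alpha)
print(f"{k:.2f}x compute-equivalent gain")  # ...is worth ~2x compute
```

which is the sense in which a cheap change of loss function could “halve training requirements”.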