Hmm, I don’t count “It may work but we’ll do something smarter instead” as “it won’t work” for my purposes.
I totally agree that noise will start to dominate eventually… but the thing I’m especially interested in with Amp(GPT-7) is not the “7” part but the “Amp” part. Using prompt programming, fine-tuning it on its own library, fine-tuning it with RL, building Chinese-room bureaucracies, training/evolving those bureaucracies… what do you think about that? Naively, the scaling laws would predict that we’d need far less long-horizon data to train them, since they have far fewer parameters, right? Moreover, IMO an evolved Chinese-room bureaucracy is a pretty good model for how humans work, and in particular for how humans are able to generalize super well and make long-term plans etc. without many lifetimes of long-horizon training.
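To make the naive version of that concrete, here’s a rough back-of-envelope sketch. Everything in it is my assumption, not something established upthread: I’m using GPT-3’s ~175B params / ~300B tokens as the reference point, the D ∝ N^0.74 non-overfitting bound from Kaplan et al. (2020) as the exponent, and the (big!) leap that the same relation would govern long-horizon training data for a bureaucracy built out of smaller components:

```python
# Back-of-envelope: data needed vs. model size, assuming the
# Kaplan et al. (2020) non-overfitting relation D ∝ N^0.74
# carries over to long-horizon data (a strong assumption).

def optimal_data_tokens(n_params: float,
                        ref_params: float = 175e9,   # GPT-3 parameter count
                        ref_data: float = 300e9,     # GPT-3 training tokens
                        exponent: float = 0.74) -> float:
    """Scale the reference (params, tokens) point by D ∝ N^exponent."""
    return ref_data * (n_params / ref_params) ** exponent

# Hypothetical bureaucracy components of 1B or 10B params vs. a 175B monolith:
for n in (1e9, 10e9, 175e9):
    print(f"{n/1e9:>5.0f}B params -> ~{optimal_data_tokens(n)/1e9:.0f}B tokens")
```

On those (again, naive) numbers, a 1B-param component would want roughly 1/45th the data of the 175B monolith, which is the intuition behind “far fewer parameters ⇒ far less long-horizon data.”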