I looked at the paper you recommended, Zack. The specific section on how AGI is developed (Section 1.2) skirts around the problem.
“We assume that AGI is developed by pretraining a single large foundation model using self-supervised learning on (possibly multi-modal) data [Bommasani et al., 2021], and then fine-tuning it using model-free reinforcement learning (RL) with a reward function learned from human feedback [Christiano et al., 2017] on a wide range of computer-based tasks. This setup combines elements of the techniques used to train cutting-edge systems such as GPT-4 [OpenAI, 2023a], Sparrow [Glaese et al., 2022], and ACT-1 [Adept, 2022]; we assume, however, that the resulting policy goes far beyond their current capabilities, due to improvements in architectures, scale, and training tasks. We expect a similar analysis to apply if AGI training involves related techniques such as model-based RL and planning [Sutton and Barto, 2018] (with learned reward functions), goal-conditioned sequence modeling [Chen et al., 2021, Li et al., 2022, Schmidhuber, 2020], or RL on rewards learned via inverse RL [Ng and Russell, 2000]—however, these are beyond our current scope.”
Altman recently said in a speech that continuing to do what led to GPT-4 is probably not going to get to AGI: “Let’s use the word superintelligence now. If a superintelligence can’t discover novel physics, I don’t think it’s a superintelligence. Training on the data of what you know, teaching to clone the behavior of humans and human text, I don’t think that’s going to get there. So there’s this question that has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?”
https://the-decoder.com/sam-altman-on-agi-scaling-large-language-models-is-not-enough/
It seems evident that no one has a clear path to AGI, nor do we know what a superintelligence would do, yet the Longtermist ecosystem is thriving. I find that curious, to say the least.