I think that the key thing we want to do is predict the generalization of future neural networks.
It’s not what I want to do, at least. For me, the key thing is to predict the behavior of AGI-level systems. The behavior of NNs-as-trained-today is relevant to this only inasmuch as NNs-as-trained-today will be relevant to future AGI-level systems.
My impression is that you think that pretraining+RLHF (+ maybe some light agency scaffold) is going to get us all the way there, meaning the predictive power of various abstract arguments from other domains is screened off by the inductive biases and other technical mechanistic details of pretraining+RLHF. That would mean we don’t need to bring game theory, economics, computer security, distributed systems, cognitive psychology, business, or history into it – we can just look at how ML systems work and are shaped, and predict everything we want about AGI-level systems from there.
I disagree. I do not think pretraining+RLHF is getting us there. I think we currently don’t know what training/design process would get us to AGI. Which means we can’t make closed-form mechanistic arguments about how AGI-level systems will be shaped by that process, which in turn means the abstract, often intuitive arguments from other fields do have relevant things to say.
And I’m not seeing a lot of ironclad arguments that favour “pretraining+RLHF is going to get us to AGI” over “pretraining+RLHF is not going to get us to AGI”. The claim that e.g. shard theory generalizes to AGI is at least as tenuous as the claim that it doesn’t.
Flagging that this is one of the main claims which we seem to dispute; I do not concede this point FWIW.
I’d be interested if you elaborated on that.
Thanks for pointing out that distinction!