Yes but I again expect AGI to use continuous learning, so the training run doesn’t really end. But yes I largely agree with that summary.
NN/DL in its various flavors is simply what efficient approximate Bayesian inference looks like, and there are no viable non-equivalent, dramatically better alternatives.
Thanks Jacob for talking me through your model. I agree with you that this is a model that EY and others associated with him have put forth. I’ve looked back through Eliezer’s old posts, and he is consistently against the idea that LLMs are the path to superintelligence (not just that they’re not the only path, but he outright denies that superintelligence could come from neural nets).
My update, based on your arguments here, is that any future claim about a mechanism for iterative self-improvement that happens suddenly, runs on the training hardware, and involves >2 OOMs of improvement needs to first address the objections you are raising here in order to meaningfully move the conversation forward.