I would not update much on Foom from this. The paper’s results are only relevant to one branch of AI development (I would call it “enormous self-supervised DL”). There may be other branches where Foom is the default mode (e.g. some practical AIXI implementation), which remain under the radar for now.
But I agree: we can now be certain that AGI is indeed a matter of time. I also agree that it gives us a chance to experiment with a non-scary AGI first (e.g. some transformer descendant that beats humans on almost everything, but remains a one-way text-processing mincer).
Moreover, BIG-bench shortens the path to AGI, as one can now measure progress towards it, and perhaps even apply RL to directly maximize the score.