I stand by this piece, and I now think it makes a nice complement to discussions of GPT-3. In both cases, we have significant improvements in chunking of concepts into latent spaces, but we don’t appear to have anything like a causal model in either. And I’ve believed for several years that causal reasoning is the thing that puts us in the endgame.
(That’s not to say either system would still be safe if scaled up massively; mesa-optimization would be a reason to worry.)
That being said, I’m not very confident this piece (or any piece on the current state of AI) will still be timely a year from now, so maybe I shouldn’t recommend it for inclusion after all.