Simple demonstration #2: Train only on science papers up until 2010, each preceded by its date and title, then ask the model to generate starting from titles and dates in 2020.
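(A minimal sketch of what that setup could look like, assuming papers come as (date, title, body) records and using a generic HuggingFace causal LM — the `DATE | TITLE` prompt format, the helper names, and gpt2 as a stand-in model are all illustrative assumptions, not part of the original proposal:)

```python
# Sketch of demonstration #2: build a training corpus of pre-2010 papers, each
# preceded by its date and title, then prompt with a 2020-dated header and sample.
# Prompt format, helper names, and model choice are illustrative assumptions.
from datetime import date
from transformers import AutoModelForCausalLM, AutoTokenizer

def format_example(paper_date: date, title: str, body: str) -> str:
    # Each training document is preceded by its date and title.
    return f"{paper_date.isoformat()} | {title}\n\n{body}"

def build_training_corpus(papers):
    # papers: iterable of (date, title, body) tuples; keep only pre-2010 papers.
    return [
        format_example(d, t, b)
        for d, t, b in papers
        if d < date(2010, 1, 1)
    ]

def generate_from_future_header(model, tokenizer, paper_date: date, title: str,
                                max_new_tokens: int = 200) -> str:
    # At test time, condition on a 2020 date and title and let the model continue.
    prompt = f"{paper_date.isoformat()} | {title}\n\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    # gpt2 is only a placeholder; the actual experiment would train (or fine-tune)
    # a model from scratch on the pre-2010 corpus built above.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    hypothetical_title = "A survey of recent advances in protein structure prediction"
    print(generate_from_future_header(model, tokenizer, date(2020, 6, 1), hypothetical_title))
```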
Arbitrarily superintelligent non-causally-trained models will probably still fail at this; IID training breaks that kind of prediction. You'd need to train them in a way that makes causally invalid models implausible hypotheses.
But, also, if you did that, then yes, agreed.