Sure, but the point is that those theories are much less likely than if GPT-3.5 had done it too.
I too was a bit surprised. Critch should probably have emphasized the hello-world twist a bit more: I don’t spend much time reading quines or recreational programming, so I was assuming it could’ve been memorized, and wasn’t sure that it was ‘novel’ (there are lots of quine ‘genres’, like multilingual quines or ‘radiation-hardened’ quines) until I’d looked through a bunch of results and noticed none of them had that twist. So his point is not that quines are somehow incredibly amazing & hitherto impossible to write, but that it’s gotten good enough at code-writing that it can meaningfully modify & adapt quines.
Surely one should look for ones that are like “quine given argument 1, output [something else] given argument 2”. The presence or absence of this sort of already-very-modular template in the data would give better context.
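For concreteness, here is a minimal sketch (my own illustration, not Critch’s example, and not something I’ve verified exists in any training corpus) of what that argument-switched template looks like in Python: given any command-line argument it prints something else, otherwise it reproduces its own source, comment lines included.

```python
# argument-switched quine sketch: prints "Hello, world!" if given any argument,
# otherwise prints its own source (including these comment lines).
import sys
s = '# argument-switched quine sketch: prints "Hello, world!" if given any argument,\n# otherwise prints its own source (including these comment lines).\nimport sys\ns = %r\nif len(sys.argv) > 1:\n    print("Hello, world!")\nelse:\n    print(s %% s)'
if len(sys.argv) > 1:
    print("Hello, world!")
else:
    print(s % s)
```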