Are you assuming that there will be a sudden jump in AI scientific research capability from subhuman to strongly superhuman? It is one possibility, sure. Another is that the first AIs capable of writing research papers won’t be superhumanly good at it, and won’t advance research very far or even in a useful direction. It seems to me quite likely that this state of affairs will persist for at least six months.
Do you give the latter scenario less than 0.01 probability? That seems extremely confident to me.
I don’t think we need superhuman capability here for stuff to get crazy; pure volume of papers could substitute for it. If you can write a mediocre but logically correct paper for $50 of compute instead of $10k of graduate student salary, that accelerates the pace of progress by a factor of 200, which seems enough to enable a whole bunch of other advances which will feed into AI research and make the models even better.
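The factor of 200 is just the ratio of the two assumed per-paper costs; as a minimal sketch (the $50 and $10k figures are the comment's own assumptions, not measured data):

```python
# Back-of-envelope arithmetic: if a mediocre-but-correct paper costs
# $50 of compute instead of ~$10k of graduate-student salary, the
# per-paper cost (and, on this argument, the pace) changes by 200x.
cost_human = 10_000   # assumed cost of a grad-student-written paper, USD
cost_ai = 50          # assumed compute cost of an AI-written paper, USD

speedup = cost_human / cost_ai
print(speedup)  # 200.0
```

The claim in the comment is that this cost ratio translates directly into research throughput, which of course assumes the cheap papers are actually usable.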
That’s not a math or physics paper, and it includes a bit more “handholding” in the form of an explicit database than would really make me update. The style of scientific papers is obviously very easy for current LLMs to copy. What I’m trying to get at is that if LLMs can start to make genuinely novel contributions at a slightly below-human level and learn from the mediocre articles they write, pure volume of papers can make up for quality.
So you’re now strongly expecting to die in less than 6 months? (Assuming that the tweet is not completely false)