Timelines appear to me to be at least one and maybe two orders of magnitude more salient in EA/rationalist circles than they are strategically relevant. I think the right level of investment in work on them is basically “sometimes people who are interested write blogposts on them in their spare time”, and it is basically not worthwhile for anyone to focus their main work on timelines at current margins. Also, the “trying to be serious” efforts on timelines typically look to me to be basically bullshit: they make implausible assumptions which simplify the problem, and then derive nonsense conclusions from those. (Ajeya’s biological anchors report is a good central example here.) (Also, to be clear, that sort of work is great insofar as people use it as a toy model and invest both effort and credence accordingly.)
Also, there’s already a not terrible model in this post, which I’d use as a reference:
https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long
The AI pause debate seems to be a “some people had a debate in their spare time” sort of project, and it looks like a pretty solid spare-time thing to do. But if making that debate happen were someone’s main job for two months, then I’d be pretty unimpressed with their productivity.
I disagree with this, mostly because Nora Belrose made some very important points and unified and strengthened the arguments that AI is easy to control, so I’m happy with how the debate went.