Fwiw, there is also AI governance work that is neither policy nor lab governance, in particular trying to answer broader strategic questions that are relevant to governance, e.g., timelines, whether a pause is desirable, which intermediate goals are valuable to aim for, and how much computing power Chinese actors will have access to. I guess this is sometimes called “AI strategy”, but often the people/orgs working on AI governance also work on AI strategy, and vice versa, and they kind of bleed into each other.
How do you feel about that sort of work relative to the policy work you highlight above?
Let’s go through those:
Timelines appear to me to be at least one and maybe two orders of magnitude more salient in EA/rationalist circles than they are strategically relevant. I think the right level of investment in work on them is basically "sometimes people who are interested write blogposts on them in their spare time", and it is basically not worthwhile for anyone to focus their main work on timelines at current margins. Also, the "trying to be serious" efforts on timelines typically look to me to be basically bullshit: they make basically-implausible assumptions which simplify the problem, and then derive nonsense conclusions from those. (Ajeya's biological anchors report is a good central example here.) (Also, to be clear, that sort of work is great insofar as people use it as a toy model and invest both effort and credence accordingly.)
The AI pause debate seems to be a "some people had a debate in their spare time" sort of project, and a pretty solid spare-time thing to do at that. But if making that debate happen was someone's main job for two months, then I'd be pretty unimpressed with their productivity.
I think there's room for high-value work on figuring out which intermediate goals are valuable to aim for. That work does not look like running a survey on which intermediate goals are valuable to aim for. It looks like careful thinking backchained from end-goals (like "don't die to AI"), a bunch of red-teaming of ideas and distilling of key barriers, combined with "comprehensive info gathering"-style research to find what options are available (e.g. the regulatory survey mentioned in the OP). The main goal of such work would be to discover novel high-value intermediate goals, barriers, and unknown unknowns. (Note that there is value in running surveys to create common knowledge, but that's a very different use-case from figuring things out.)
I do not currently see much reason to care about how much computing power Chinese actors will have access to. If the world were e.g. implementing a legal regime around AI which used compute availability as a major lever, then sure, the availability of computing power to Chinese actors would matter for determining how much buy-in is needed from the Chinese government to make that legal regime effective. But realistically, the answer to that is probably "the Chinese need to be on board for a legal strategy to be effective regardless". (Also, I don't expect Chinese interest to be rate-limiting anyway; their government's incentives are much more directly aligned with X-risk mitigation than those of most other governments.)
In general, there are versions of strategy work which I would consider useful. But in practice, it looks to me like people who invest full-time effort in such things do not usually produce more value than people just e.g. writing off-the-cuff posts on the topic, organizing debates as side-projects, or spending a few evenings going through some data and writing it up as a moderate-effort post. Most people do not seem to know how to put more effort into such projects in a way which produces more actual value, as opposed to just more professional-looking outputs.
Also, there's already a not-terrible timelines model in this post, which I'd use as a reference:
https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long
I disagree with this take on the pause debate, mostly because Nora Belrose made some very important points, unifying and strengthening the arguments that AI is easy to control, so I'm happy with how the debate went.