A Call for More Policy Analysis

I would like to see more concrete discussion and analysis of AI policy in the EA community, and on this forum in particular.

By AI policy I broadly mean the decisions and strategies of all actors meaningfully influencing the future and impact of AI: most likely governments, research labs and institutes, and international organizations.

Some initial thoughts and questions I have on this topic:

1) How do we ensure that all research groups with a realistic chance of developing AGI know about and care about the relevant work in AI safety (which will hopefully be satisfactorily worked out by then)?

Some possibilities: making AI safety a standard feature of computer science curricula, general community building, more AI safety conferences, and more popular culture that conveys the risks without resorting to Terminator-style imagery.

2) What strategies might be available for laggards in a race scenario to slow the progress of leading groups, or to steal their research?

Some possibilities, in no particular order: espionage, malware, financial or political pressure, power outages, and surveillance of researchers.

3) Will there be clear warning signs?

Warning signs might appear not just in general AI progress but also locally, near the leading lab: observable changes in stock prices, electricity usage, and so on.

4) Openness or secrecy?

Thankfully the Future of Life Institute is working on this one. As I understand it, the consensus is that openness is advisable now but that secrecy may become necessary later. If so, what mechanisms are available to keep research private?

5) How many players will there be with a significant chance of developing AGI? Which players?

6) Is an arms race scenario likely?

7) What is the most likely speed of takeoff?

8) When and where will AGI be developed?

Personally, I believe using forecasting tournaments to get a better sense of when and where AGI will arrive would be a very worthwhile use of our time and resources. After reading Superforecasting by Philip Tetlock and Dan Gardner, I was struck by how effective these tournaments are at identifying forecasters with low Brier scores and aggregating their judgments into better-than-average predictions of future events.
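For readers unfamiliar with the metric: the Brier score is the mean squared difference between a probabilistic forecast and the binary outcome, so lower is better (0.0 is perfect, and always guessing 50% earns 0.25). Here is a minimal sketch of how a tournament might rank forecasters this way; the forecaster names and numbers are made up purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is a perfect score; constant 50% guessing scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical tournament data: each forecaster's probability for five
# yes/no questions, plus how those questions actually resolved (1 = yes).
outcomes = [1, 0, 0, 1, 1]
forecasters = {
    "A": [0.9, 0.2, 0.1, 0.8, 0.7],
    "B": [0.6, 0.5, 0.4, 0.5, 0.6],
}

# Rank forecasters from best (lowest score) to worst.
for name, probs in sorted(forecasters.items(),
                          key=lambda kv: brier_score(kv[1], outcomes)):
    print(name, round(brier_score(probs, outcomes), 3))
```

A tournament then weights future aggregate forecasts toward the people at the top of this ranking, which is roughly how superforecasters are singled out.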

Perhaps the EA community could fund a forecasting tournament on the Good Judgment Project, posing questions aimed at ascertaining when AGI will be developed (my guess is that superforecasters would make more accurate predictions than AI experts on this topic), which research groups are the most likely candidates to develop the first AGI, and other relevant questions. We would need to formulate the questions so that they are specific enough to resolve unambiguously: asking, for instance, whether a concrete milestone will be reached by a given date, rather than asking when AGI will arrive.