AI strategy research
Mostly for personal use.
Some prompts inspired by Framing AI strategy:
Plans
What plans would be good?
Given a particular plan that is likely to be implemented, what interventions or desiderata complement that plan (by making it more likely to succeed or by being better in worlds where it succeeds)?
Affordances: for various relevant actors, what strategically significant actions could they take? What levers do they have? What would it be great for them to do (or avoid)?
Intermediate goals: what goals or desiderata are instrumentally useful?
Threat modeling: for various threats, model them well enough to understand necessary and sufficient conditions for preventing them.
Memes (& frames): what would it be good if people believed or paid attention to?
For forecasting prompts, see List of uncertainties about the future of AI.
Some miscellaneous prompts:
Slowing AI
How can various relevant actors slow AI?
How can the AI safety community slow AI?
What considerations or side effects are relevant to slowing AI?
How do labs act, as a function of [whatever determines that]? In particular, what’s the deal with “racing”?
AI risk advocacy
How could the AI safety community do AI risk advocacy well?
What considerations or side effects are relevant to AI risk advocacy?
What’s the deal with crunch time?
How will the strategic landscape be different in the future?
What will be different near the end, and what interventions or actions will that enable? In particular, is eleventh-hour coordination possible? (Also maybe emergency brakes that don’t appear until the end.)
What concrete asks should we have for labs? For government?
Meta: how can you help yourself or others do better AI strategy research?