I expect to update this comment with additional sources—and perhaps new analytic frames—as I become aware of them and they become public. Last updated 23 May 2023.
Affordances:
[draft] Matthijs Maas’s “Levers of governance” in Transformative AI Governance: A Literature Review
Eugene Bardach’s Things Governments Do (affordances for states from a non-AI perspective) (thanks to Matthijs Maas for this suggestion)
Observation: you get different taxonomies if you start from goals (like “slow China”) than if you start from levers (like “immigration policy”), and the corresponding uncertainties differ too: “how can actor X achieve goal Y” vs. “how can actor X use its ability Z.” Each starting point may surface different affordances, so it’s worth trying both.
Alex Gray mentions as a motivating/illustrative example (my paraphrasing): labs are unlikely to create windfall clauses (or equity-sharing or other benefit-sharing mechanisms) themselves, but it’s relatively easy for a lab to adopt a windfall-clause affordance that already exists.
Intermediate goals:
Rethink Priorities’s Survey on intermediate goals in AI governance (2023)
[draft] Matthijs Maas’s “Parameters of Transformative AI Governance”
Theories of victory:
Matthijs Maas’s Strategic Perspectives on Transformative AI Governance (2022) and related work in progress
Memes & frames:
Maybe Holden Karnofsky’s Spreading messages to help with the most important century (2023)
Leverage:
[draft] Alex Lintz’s “A simple model for how to prioritize different timelines”
Vaniver’s Weight by Impact (2023)
If I were rewriting this post today, I would probably discuss something like object-level frames or strategic perspectives. They make certain aspects of a situation more salient; whether or not they’re true (and whether or not they’re even the kind of thing that can be true), they can be useful. See Matthijs Maas’s Strategic Perspectives on Transformative AI Governance for illustration.