Problems in AI risk that economists could potentially contribute to
Context
Wei Dai's list of Problems in AI Alignment that philosophers could potentially contribute to made me think that it could be useful to have a list of problems in AI risk that economists could potentially contribute to. So I began making such a list.
I intend for this to include both technical and governance problems, and problems relevant to a variety of AI risk scenarios (e.g., AI optimising against humanity, AI misuse by humans, AI extinction risk, AI dystopia risk...)
But:
I'm neither an AI researcher nor an economist
I spent hardly any time on this, and just included things I've stumbled upon, rather than specifically searching for these things
So I'm sure there's a lot I'm missing.
Please comment if you know of other things worth mentioning here. Or if you’re better placed to make a list like this than I am, feel very free to do so; you could take whatever you want from this list and then comment here to let people know where to find your better thing.
(It’s also possible another list like this already exists. And it’s also possible that economists could contribute to such a large portion of AI risk problems that there’s no added value in making a separate list for economists specifically. If you think either of those things is true, please comment to say so!)
List(s) of relevant problems
What can the principal-agent literature tell us about AI risk? (and this comment)
Many of the questions in Technical AGI safety research outside AI
Many of the questions in The Centre for the Governance of AI’s research agenda
Many of the questions in Cooperation, Conflict, and Transformative Artificial Intelligence (a research agenda of the Center on Long-Term Risk)
At least a couple of the questions in 80,000 Hours’ Research questions that could have a big social impact, organised by discipline
Longtermist AI policy projects for economists (this doc was originally just made for Risto Uuk’s own use, so the ideas shouldn’t be taken as high-confidence recommendations to anyone else)
Comment from another user:
Recently I was also trying to figure out what resources to send to an economist, and couldn’t find an existing list either! The list I came up with is subsumed by yours, except:
- Questions within Some AI Governance Research Ideas
- “Further Research” section within an OpenPhil 2021 report: https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth
- The AI Objectives Institute just launched, and they may have questions in the future