The whole thing reads a bit like “AI governance” and “AI strategy” reinvented under a different name, seemingly without bothering to understand what the current understanding is.
I overall agree with this comment, but do want to push back on this sentence. I don’t really know what it means to “invent AI governance” or “invent AI strategy”, so I don’t really know what it means to “reinvent AI governance” or “reinvent AI strategy”.
Separately, I also don’t really think it’s worth spending a ton of time trying to really understand what current people think about AI governance. Like, I think we are mostly confused, it’s a very green-field situation, and it really doesn’t seem to me that you have to read the existing stuff to helpfully contribute. Also, a really large fraction of the existing stuff is actually just political advocacy dressed up as inquiry, and I think many people are better off not reading it (like, the number of times I was confused about a point of some AI governance paper and the explanation turned out to be “yeah, the author didn’t really believe this, but saying it would advance their political goals of being taken more seriously, or gaining reputation, or allowing them to later change policy in some different way” is substantially larger than the number of times I learned something helpful from these papers, so I do mostly recommend staying away from them).
I overall agree with this comment, but do want to push back on this sentence. I don’t really know what it means to “invent AI governance” or “invent AI strategy”, so I don’t really know what it means to “reinvent AI governance” or “reinvent AI strategy”.
By reinventing it, I mean, for example, asking questions like “how to influence the dynamic between AI labs in a way which allows everyone to slow down at a critical stage”, “can we convince some actors about AI risk without the main effect being that they will put more resources into the race”, “what’s up with China”, “how to generally slow things down when necessary” and similar, and attempts to answer them.
I do agree that reading a lot of policy papers is of limited use for direct hypothesis forming: in my experience the more valuable parts often have the form of private thoughts or semi-privately shared thinking.
On the other hand… in my view, if people have a decent epistemic base, they often should engage with the stuff you dislike, but from a proper perspective: not “this is the author attempting to literally communicate what they believe”, but more of “this is a written speech-act which probably makes some sense and has some purpose”. In other words… people who want to work on strategy unfortunately eventually need to be able to operate in epistemically hostile environments. They should train elsewhere, and spend enough time elsewhere to stay sane, but they need to understand e.g. how incentive landscapes influence what people think and write, and that is not something you can get good at without dipping your feet in the water.
I’m sympathetic under some interpretations of “a ton of time,” but I think it’s still worth people’s time to spend at least ~10 hours of reading and ~10 hours of conversation getting caught up with AI governance/strategy thinking, if they want to contribute.
Arguments for this:
Some basic ideas/knowledge that the field is familiar with (e.g. on the semiconductor supply chain, antitrust law, immigration, US-China relations, how relevant governments and AI labs work, the history of international cooperation in the 20th century) seem really helpful for thinking about this stuff productively.
First-hand knowledge of how relevant governments and labs work is hard/costly to get on one’s own.
Lack of shared context makes collaboration with other researchers and funders more costly.
Even if the field doesn’t know that much and lots of papers are more advocacy pieces, people can learn from what the field does know and read the better content.
Yeah, totally, 10 hours of reading definitely seems worth it, and, like, I think many hours of conversation too, if only because those hours of conversation will probably just help you think through things yourself.
I also think it does make a decent amount of sense to coordinate with existing players in the field before launching new initiatives and doing big things, though I don’t think it should be a barrier to suggesting potential plans or discussing potential directions forward.
Do you have links to stuff you think would be worthwhile for newcomers to read?
Yep! Here’s a compilation.
If someone’s been following along with popular LW posts on alignment and is new to governance, I’d expect them to find the “core readings” in “weeks” 4-6 most relevant.
Thanks!