I broadly agree with this general take, though I’d like to add some additional reasons for hope:
1. EAs are spending way more effort and money on AI policy. I don’t have exact numbers on this, but I do have a lot of evidence in this direction: at every single EAG, there are far more people interested in AI x-risk policy than biorisk policy, and even those focusing on biorisk are not really focusing on preventing gain-of-function research (as opposed to, say, engineered pandemics or general robustness). I think this is the biggest reason to expect that AI might be different.
I also think there’s some degree of specialization here, and having the EA policy people all swap to biorisk would be quite costly down the line. So I do sympathize with the majority of AI-x-risk-focused EAs doing AI x-risk stuff, as opposed to biorisk stuff. (Though I also do think that getting a “trial run” in would be a great learning experience.)
2. Some of the big interventions that people want are things governments might do anyway. To put it another way, governments have a lot of inertia. Often when I talk to AI policy people, the main reason for hope is that they want the government to do something that already has a standard template, or is something that governments already know how to do. Take the authoritarian-regimes example you gave, especially if the approach is to dump an absolute crapton of money on compute to race harder, or to use sanctions to slow down other countries. Another example people talk about is having governments break up or nationalize large tech companies, so as to slow down AI research. Or maybe the action needed is to enforce some “alignment norms” that are easy to codify into law, and that the policy teams of industry groups are relatively bought into.
The US government already dumps a lot of money onto compute and AI research, is leveling sanctions against China, and has many Senators that are on board for breaking up large tech companies. The EU already exports its internet regulations to the rest of the world, and it’s very likely that it’d export its AI regulations as well. So it might be easier to push these interventions through than it is to convince the government not to give $600k to a researcher to do gain-of-function research, which is what they have been doing for a long time.
(This seems like how I’d phrase your first point. Admittedly, there’s a good chance I’m also failing the ideological Turing test on this one.)
3. AI is taken more seriously than COVID. I think it’s reasonable to believe that the US government takes AI issues more seriously than COVID—for example, it’s seen as more of a national security issue (esp wrt China), and it’s less politicized. And AI (x-risk) is an existential threat to nations, which generally tends to be taken way more seriously than COVID is. So one reason for hope is that policymakers don’t really care about preventing a pandemic, but they might actually care about AI, enough that they will listen to the relevant experts and actually try. To put it another way, while there is a general factor of sanity that governments can have, there’s also tremendous variance in how competent any particular government is on various tasks. (EDIT: Daniel makes a similar point above.)
4. EAs will get better at influencing the government over time. This is similar to your second point. EAs haven’t spent a lot of time trying to influence politics. This isn’t just about putting people into positions of power—it’s also about learning how to interface with the government in ways that are productive, how to spend money to achieve political results, and how to convince senior policymakers. It’s likely we’ll get better at influence over time as we learn what to do and what not to do, and will leverage our efforts more effectively.
For example, the California YIMBYs were a lot worse at interfacing effectively with the state government and the media when they first started ~10 years ago. But recently they’ve had many big wins in terms of legalizing housing!
(That being said, it seems plausible to me that EAs should try to get gain-of-function research banned as a trial run, both because we’d probably learn a lot doing it, and because it’s good to have clear wins.)
and has many Senators that are on board for breaking up large tech companies.
That’s exactly the opposite of what we need, if you listen to AI safety policy folk, because it strengthens race dynamics. If all the tech companies were merged together, they would likely be the first to develop AGI, and would thus have to worry less about other researchers getting there first, which would let them invest more resources into safety.
Idk, I’ve spoken to AI safety policy people who think it’s a terrible idea, and some who think it’ll still be necessary. On one hand, you have the race dynamics; on the other, you have returns to scale and higher profits from horizontal/vertical integration.
I think it’s reasonable to believe that the US government takes AI issues more seriously than COVID—for example, it’s seen as more of a national security issue (esp wrt China), and it’s less politicized.
I’m not sure that’s helpful from a safety perspective. Is it really helpful if the US unleashes the unfriendly self-improving monster first, in an effort to “beat” China?
From my reading and listening on the topic, the US government does not take AI safety seriously, when “safety” is defined in the way that we define it here on LessWrong. Their concerns around AI safety have more to do with things like ensuring that datasets aren’t biased so that the AI doesn’t produce accidentally racist outcomes. But thinking about AI safety to ensure that a recursively self-improving optimizer doesn’t annihilate humanity on its way to some inscrutable goal? I don’t think that’s a big focus of the US government. If anything, that outcome is seen as an acceptable risk for the US remaining ahead of China in some kind of imagined AI arms race.
Are any of these cruxes for anyone?
My impression is that 2 and 4 are relatively cruxy for some people? Especially 2.
IE I’ve heard from some academics that the “natural” thing to do is to join with the AI ethics crowd/Social Justice crowd and try to get draconian anti-tech/anti-AI regulations passed. My guess is their inside-view beliefs are some combination of:
A. Current tech companies are uniquely good at AI research relative to their replacements. IE, even if the US government destroys $10b of current industry R&D spending and then spends $15b on AI research, this is way less effective at pushing AGI capabilities.
B. Investment in AI research happens in large part due to the expectation of outsized profits. Destroying the expectation of outsized profits, via draconian anti-innovation/anti-market regulation or just by tacking on massive regulatory burdens (which the US/UK/EU governments are very capable of doing), is enough to curb research interest in this area significantly.
C. There’s no real pressure from Chinese AI efforts. IE, delaying current AGI progress in the US/UK by 3 years just actually delays AGI by 3 years. More generally, there aren’t other relevant players besides the big, well-known US/UK labs.
(I don’t find 2 super plausible myself, so I don’t have a great inside view of this. I am trying to understand this view better by talking to said academics. In particular, even if C is true (IE China not an AI threat), the US federal government certainly doesn’t believe this and is very hawkish vs China + very invested in throwing money at, or at least not hindering, tech research it believes is necessary for competition.)
As for 4, this is a view I hear a lot from EA policy people? E.g., we used to make stupid mistakes, and we’re definitely not making them now; we used to just all be junior, now we have X and Y high-ranking positions; and we did a bunch of experimentation and figured out what messaging works relatively better. I think 4 would be a crux for me, personally—if our current efforts to influence government are as good as we can get, I think this route of influence is basically unviable. But I do believe that 4 is probably true to a large extent.