Governments will take control of AGI before it’s ASI, right?
Governments don’t have to make AGI to control AGI. They still have a monopoly on force. Surely we’re not still expecting things to move so fast that they don’t notice what’s going on before AGI changes the physical balance of power?
If governments (likely the US government) do assert some measure of control over AGI projects, they will be involved in decisions about alignment and control strategies as AGI improves. As long as we survive those decisions (which I think we probably will, at least for a while[1]), they will also be deciding what economic or military uses that AGI is put to.
I predict that governments are going to notice the military applications and exert some measure of control over those projects. If AGI companies, personnel, or projects hop borders, they’re just changing which guys with guns will end up taking control from them in important ways.
For a while here, I’ve been puzzled that analyses of the policy implications of AGI don’t often include government control and military applications. I haven’t wanted to speak up, just in case we’re all keeping mum so as not to tip off governments. Aschenbrenner’s Situational Awareness has let that cat out of that bag, so I think it’s time to include this likelihood in our public strategy analysis.
I think we’re used to a status quo in which Western governments have been pretty hands-off in their relationship with technology companies. But that has historically changed with circumstances (e.g., the War Powers Acts in WWII), and circumstances are changing, ever more obviously. People with relevant expertise have been shouting from the hilltops that AGI will make dramatic changes in the world, many talking about it literally taking over the world. Sure, those voices can be dismissed as crackpots now, but as AI progresses visibly toward AGI (and the efforts are visible), more and more people will take notice.
Are the politicians dumb enough (with regard to technology and cognitive science) to miss the implications until it’s too late? I think they are. Humans are stunningly foolish outside of their own expertise and when we don’t have personal motivation to think things through thoroughly and realistically.
Are the people in national security collectively dumb enough to miss this? No way.
I’ve heard people dismiss government involvement because a Manhattan Project or nationalization seem unlikely for several reasons. I agree. My point here is that it just takes a couple of guys with guns showing up at the AGI company and informing them that the government wants in on all consequential decisions. If laws need to be changed, they will be (I think they actually don’t need to be, given the security concerns). It would be the quickest bipartisan legislation ever: The “nice demigod, we’ll take it” bill.
I’m not certain about all of this, but it does seem highly probable. I think we’ve been collectively unrealistic about likely first-AGI scenarios. Would you rather have Sam Altman or the US Government in charge of AGI as it progresses to ASI? I don’t know which I’d take, but I don’t think I get a choice.
One implication is that public and government attitudes toward AGI x-risk issues may be critical. We can work to prepare the ground. Current political efforts haven’t convinced the public or the government that AGI is important, let alone existentially risky, but progress is on our side in that effort.
I’d love to hear alternate scenarios in which this doesn’t happen, or things I’m missing.
^ It seems like AGI remaining under human control is the biggest variable, but if that doesn’t happen, policy impacts are kind of irrelevant. I think it’s pretty likely that instruction-following or corrigibility as a singular target will be implemented successfully for full superhuman AGI, for reasons given in those links. That type of alignment target doesn’t guarantee good results by definition the way value alignment does, but it does seem much easier to achieve, since partial success can be leveraged into full success during a slow takeoff.
Government involvement might just look like the companies adding people like Paul Nakasone to their boards.
At the low end of the spectrum, yes. That appointment may well indicate that they’re already interested in keeping an eye on the situation. Or that OpenAI is pre-empting some concerns about security of their operation.
I’d expect government involvement to ramp up from there by default unless there’s a blocker I haven’t thought of or seen discussed.
Maybe the balance of power has changed. Politicians need to win in democratic elections. Democratic elections are decided by people who spend a lot of time online. The tech companies can nudge their algorithms to provide more negative information about a selected politician, and more positive information about his competitors. And the politicians know it.
Banning Trump on social networks, no matter how much some people applauded it for tribal reasons, sent a strong message to all politicians across the political spectrum: you could be next. At least banning is obvious, but getting the negative news about you on the first page of Google results and moving the positive news to the second page, or sharing Facebook posts from your haters and hiding Facebook posts from your fans would be more difficult to prove.
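To make the "nudge their algorithms" idea concrete, here is a deliberately toy Python sketch of a biased re-ranker. Every field, name, and weight is invented for illustration; real ranking systems are vastly more complex, and nothing here reflects any actual product.

```python
# Toy illustration of a quiet ranking nudge, not any real search or feed system.
# All fields, names, and weights are made up for the example.

def rerank(results, target, bias=0.2):
    """Re-score items, quietly demoting negative ones that mention `target`."""
    def score(item):
        s = item["relevance"]
        if target in item["text"] and item["sentiment"] < 0:
            s -= bias  # small, deniable demotion rather than an outright ban
        return s
    return sorted(results, key=score, reverse=True)

results = [
    {"text": "Senator X scandal deepens",  "relevance": 0.90, "sentiment": -0.8},
    {"text": "Senator X praised for bill", "relevance": 0.85, "sentiment": 0.7},
]

# Without the bias the scandal story ranks first; with it, the positive story does.
for item in rerank(results, "Senator X"):
    print(item["text"])
```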
The government takeover of tech companies would require bipartisan action prepared in secret. How much can you prepare something secret if the tech companies own all your communication means (your messages, the messages of your staff), and can assign an AI to compile the pieces of information and detect possible threats?
I think there are considerations like these that could prevent the government from being in charge, but the default scenario from here is that they do exert control over AGI in nontrivial ways.
Interesting points. I think you’re right that there’s pressure on politicians to do what tech companies want. This would apply to some of them (Google and Meta), but not to OpenAI or Anthropic, since they don’t control media platforms.
I don’t think government control would require any bipartisan action. I think the existing laws surrounding national security would suffice, since AGI is absolutely security-relevant. (I’m no law expert, but my GPT-4o legal consultant thought it was likely.) If it did require new laws, those wouldn’t need to be secret.
Reconnaissance might be a candidate for one of the first uses of powerful A(G)I systems by militaries—if this isn’t already the case. There’s already an abundance of satellite data (likely exabytes in the next decade) that could be thrown into training datasets. It’s also less inflammatory than using AI systems for autonomous weapon design, say, and politically more feasible. So there’s a future in which A(G)I-powered reconnaissance systems have some transformative military applications, the military high-ups take note, and things snowball from there.
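To gesture at how low the technical bar already is, here is a minimal sketch of the standard pattern: fine-tuning an off-the-shelf image classifier on labeled overhead image chips. The `satellite_tiles/` folder and the whole setup are hypothetical stand-ins, and real reconnaissance pipelines would obviously be far larger and more specialized.

```python
# Minimal sketch, assuming a hypothetical folder of labeled satellite tiles
# (satellite_tiles/<class_name>/*.png) and a standard PyTorch + torchvision setup.
# Illustrates the general shape of "throw overhead imagery into a training
# pipeline", not any actual military system.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("satellite_tiles", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and swap in a new classification head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```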
Sure, at the low end. I think there are lots of reasons the government is and will continue to be highly interested in AI for military purposes.
That’s AI; I’m thinking about competent, agentic AGI that also follows human orders. I think that’s what we’re likely to get, for reasons I go into in the instruction-following AGI link above.