You’re absolutely right that the government will get involved. I was hoping for more of a collaboration between the tech company that creates it and the government. If we don’t have a breakdown in democracy (which is entirely possible but I don’t think inevitable), that will put the government in charge of ASI. Which sounds bad, but it could be worse—having nobody but the ASI in charge, and it being misaligned.
My hope is something like “okay, we’ve realized that this is way too dangerous to go making more of them. So nobody is allowed to. But we’re going to use this one for the betterment of all. Technologies it creates will be distributed without restrictions, except when needed for safety”.
Of course that will be a biased version of “for the good of all”, but now I think the public scrutiny and the sheer ease of doing good might actually win out.
Hmm. I still think this view of a possible future might not take enough account of the fact that ‘the government’ isn’t a thing. There are in fact several governments governing various portions of the human population, and they don’t always agree on who should be in charge. I am suggesting that whichever of these governments seems to be about to take control of a technology which will give it complete control over all the other governments… might be in for a rocky time. Sometimes they get mad at each other, or power-hungry, and do some rather undemocratic things.
Right. I mean the US government. My timelines are short enough that I expect one of the US tech firms to achieve AGI first. Other scenarios seem possible but unlikely to me.
The scenario that seems likely to me is that Russia and/or China are going to, at some point, recognize that the US companies (and thus the US government) are on the brink of achieving AGI sufficiently powerful to ensure global hegemony. I expect that in that moment, if there is not a strong international treaty regarding sharing of power, Russia and/or China will feel backed into a corner. In the face of an existential risk to their governance, their governments and militaries are likely to undertake either overt or covert acts of war.
If such a scenario does come to pass, in a highly offense-favoring, fragile world-state such as the one we are in, the results would likely be extremely messy. As in, lots of civilian casualties, and most or all of the employees of the leading labs dead.
Thus, I don’t think it makes sense to focus on the idea of “OpenAI develops ASI and the world smoothly transitions into Sam Altman as All-Powerful-Ruler-of-Everything-Forever” without also considering that an even more likely scenario, if things seem to be going that way, is all employees of OpenAI dead, most US datacenters bombed, and a probable escalation into World War III, but with terrifying new technology.
So what I’m saying is that your statement:
But I’d prefer to gamble on the utopias offered by Altman, Hassabis, or Amodei. This is an argument against an AI pause, but not a strong one.
is talking about a scenario that, to me, seems screened off from occurring by really bad outcomes along the way. Like, I’d put less than a 5% chance on a leading AI lab getting all the way to deployment-ready aligned ASI without either strong international cooperation, treaties, and power-sharing with other nations, or substantial acts of state-sponsored violence with probable escalation to World War. I believe a peaceful resolution in this scenario requires treaties first.