Thanks for this analysis. You make a very good point.
I think we’d do well to consider how the rest of the world is going to think about AGI, once they really start to think about it. The logic isn’t that convoluted. The fact that people often fail to engage with it doesn’t mean they can’t get it once it seems both pressing and socially relevant to have an opinion. I think we as a community have been sort of hoping that the public and politicians will leave this to the experts. But they probably won’t.
The problem with this proposed equilibrium is that it isn’t stable. Everyone is incentivized to publicly decry building aligned AGI while secretly racing everyone else who is doing the same. That sounds even more dangerous.
Also note that, while aligned AGI could be used as a cultural superweapon, and it would be extremely tempting to do so, it doesn’t have to be used that way. The better ideas for alignment goals are along the lines of empowering humans to do whatever they want. That includes continuing to engage with their preferred culture.
Edit:
There’s also the small matter of a really aligned AGI promising to solve almost all of the world’s problems. The powerful may not have as many problems themselves, but every human and all of their loved ones are currently going to die unless we develop transformative technologies in time to stop it. I, and a lot of people, would trade a fair amount of cultural success for not watching everyone I love, including me, die.
UPD 07/16/2023: This is all nonsense, never mind.
>I think we’d do well to consider how the rest of the world is going to think about AGI, once they really start to think about it.
Why are you sure it hasn’t happened yet? The last year felt to me (in Moscow) like a nightmarish heap of nonsense, but everything becomes devilishly logical if we assume that Putin understands, as well as you and I do, what the emergence of AI threatens him with personally (and humanity at the same time), and is ready to do anything to prevent it.
(I hope no one needs to be told that a nuclear war cannot be started simply by pressing a button? Putin could not have done this a year ago; even now, after a year of war psychosis, he still cannot, but he can reach the desired state in a few obvious steps. I do not understand what the ultimate goal is: to save humanity? to save humanity by becoming a god before the others? Nor do I understand how he plans to achieve it, but people no more stupid than me have been thinking about it for a couple of years longer…)
It’s an interesting thought; I have read that a few years ago Putin said that whoever controls AI controls the world. I think in general the sense of a great upcoming transformation may have forced his hand, but I always assumed it was climate change (Ukraine is a big agricultural country, after all).
But of course it could also be personal (his own mortality making him feel like he has to leave a legacy). AI was not an angle I had considered; if this really was about looking for a nuclear casus belli then the situation would be even more dangerous than it seems. I doubt it though personally; if he wanted that, couldn’t he have gotten away at least with first use of tactical nukes in Ukraine? That would be a ramp to escalation.
UPD 07/16/2023: This is all nonsense, never mind.
>It’s an interesting thought; I have read that a few years ago Putin said that whoever controls AI controls the world.
In reality, he said that whoever controls the AI will “become the Overlord of the World” (“станет Властелином Мира”). (I’m not sure how to translate it correctly: Master, Ruler, Lord… but you can’t call a country that, only a person, and it is an established phrase for denoting the ultimate goal of all sorts of evil geniuses and supervillains.)
Putin said this at least twice, in 2017 and 2019:
“Artificial intelligence is not only the future of Russia, it is the future of all mankind. There are colossal opportunities and threats that are difficult to predict today. Whoever becomes the leader in this area will be the overlord of the world.”
http://kremlin.ru/events/president/news/55493 (September 2017)
“If someone can secure a monopoly in the field of artificial intelligence, the consequences are clear to all of us: he will become the overlord of the world.”
https://www.forbes.ru/obshchestvo/376957-stat-vlastelinom-mira-putin-potreboval-obespechit-suverenitet-rossii-v-oblasti (May 30, 2019)
Lately he hasn’t said anything like that, but the fact that on November 24, 2022 (less than two weeks after the retreat from Kherson) he participated in a conference on AI is also remarkable (http://kremlin.ru/events/president/news/69927). Well, also, “ruler of the world” is his middle name (and his first name too): https://ru.wikipedia.org/wiki/Владимир_(имя)
>I doubt it though personally; if he wanted that, couldn’t he have gotten away at least with first use of tactical nukes in Ukraine? That would be a ramp to escalation.
It was also impossible to launch a small nuclear strike without a year of propaganda preparation. Possible sequence:
- the Ukrainian offensive breaks through the front (which Putin appears to be consciously facilitating);
- a tactical nuclear strike;
- a nuclear explosion in Moscow (not a mandatory step, but it costs nothing (8 minutes earlier or later) and it gives a lot of tactical advantages for controlling the scale of the war and what follows);
- a limited exchange of blows with the US.
However, it’s not Putin who looks like the beneficiary, but China. I don’t know; maybe for this KGB colonel the lust for power is just one more mask among masks, like the technophobia, and in fact he is still faithful to the ideas of communism?
What I definitely believe in: AI is the Cursed One Ring of Omnipotence, encrusted with the Philosopher’s Infinity Stones, and incomparably cooler even than that; nothing so absurdly valuable and dangerous has appeared in any comic.
This is understood not only by me, but also by the owners of billions, the leaders of scientific groups and underground empires, the owners of nuclear arsenals and Altman’s secretary.
Civilizations do not perish by being turned into paper clips (we would be the paper clips), but dying in the battle royale for possession of this artifact, a battle that has already started, is a very real possibility for mankind.
>I think we as a community have been sort of hoping that the public and politicians will leave this to the experts.
I honestly don’t think they should, and for good reason. Right now half of the experts I’m seeing on Twitter make arguments like “if you aren’t scared of airplanes why are you scared of AI” and similar nonsense (if you follow the discourse there, you know who I’m talking about). The other half is caught in a heated debate in which stating that you think AI has only a 5% chance of destroying everything we ever cared about and held dear is called “being an optimist”.
From the outside, this looks like an utterly deranged bubble that has lost touch with reality. These are all people who are very passionate about AI as a scientific/technical goal and have personally gone all-in on this field with their careers. The average person doesn’t give a rat’s ass about AI beyond what it can do for them; they want to just live their life well and hope the same for their children. Their cost/benefit evaluations will be radically different.
>The problem with this proposed equilibrium is that it isn’t stable. Everyone is incentivized to publicly decry building aligned AGI while secretly racing everyone else who is doing the same. That sounds even more dangerous.
To a point, but you could say the same of e.g. bioweapons, or nuclear armaments. But somehow we’ve managed, possibly because coupled to all that is a real awareness that those things are mostly paths to self-destruction anyway.
>Also note that, while aligned AGI could be used as a cultural superweapon, and it would be extremely tempting to do so, it doesn’t have to be used that way. The better ideas for alignment goals are along the lines of empowering humans to do whatever they want. That includes continuing to engage with their preferred culture.
Hmm, true only to a point IMO. If you don’t use AGI as a cultural superweapon, you’ll use it as a meta-cultural one. “All humans should be empowered to do what they want equally” is itself a cultural value. It’s one I share, but I think we can all point at people who would genuinely disagree. And there are probably deeper consequences that come with that. As I said, at the very least, many large and powerful religious groups would see the kind of post-singularity world you imagine as a paradise as actually sacrilegious and empty of all meaning and purpose. That alone would make things potentially very spicy. Saying “but you have the option” only fixes this to a point: many people worry about what others do, and many people don’t want the option either, because they see it as temptation. I don’t agree with that, but it’s a thing, it will matter a lot before any AGI is deployed, and it may matter overall to its morality. (I don’t think that bigots or suchlike should be allowed to interfere with others’ lives, but there’s still something a bit disturbing to me about the sheer finality and absoluteness of straight up imposing an immutable, non-human-determined system on everyone. That said, if that were the only issue, we’d definitely be way up in the top 5th percentile of possible AGI utopias.)
>There’s also the small matter of a really aligned AGI promising to solve almost all of the world’s problems. The powerful may not have as many problems themselves, but every human and all of their loved ones are currently going to die unless we develop transformative technologies in time to stop it. I, and a lot of people, would trade a fair amount of cultural success for not watching everyone I love, including me, die.
True, but this must be weighed against the risks too. The higher the potential power of AGI to solve problems, the higher the dangers if it goes awry (if it is possible to make us immortal, it is possible to make us immortal and tortured forever, for example). I worry that in general feeling like the goal is in sight might catalyse a rush that loses us the option altogether. I don’t like the idea of dying, but there’s a reason why the necromancer who is ready to sacrifice thousands of souls to gain immortality for himself is usually the villain in stories.
I think nuclear weapons and bioweapons are importantly different from AGI, because they are primarily offensive. Nuclear weapons have been stalemated by the doctrine of mutually assured destruction. Bioweapons could similarly inflict immense damage, but in the case of engineered viruses they would be turned back on their users, deliberately if not accidentally. Aligned AGI could enable the neutralization of others’ offensive weapons, once it gets smart enough to create the means to do so. So deploying it holds little downside, and a lot of defensive upside.
Also note that many nations have worked to obtain nuclear weapons despite being signatory to treaties saying they would not. It’s the smart move, in many ways.
For those reasons I don’t think that treaties are a long-term viable means to prevent AGI. And driving those projects into military black-ops projects doesn’t sound like it’s likely to up the odds of creating aligned AGI.
On your last point, I personally agree with you. Waiting until we’re sure we have safe AI is the right thing to do, even if this generation dies of old age during that wait. But I’m not sure how the public will react if it becomes common belief that AGI will either kill us, or solve all of our practical problems. They could push for development just as easily as push for a moratorium on AGI development.
>Aligned AGI could enable the neutralization of others’ offensive weapons, once it gets smart enough to create the means to do so. So deploying it holds little downside, and a lot of defensive upside.
Depends how fast it goes, I guess. Defending is always harder than attacking when it comes to modern firepower, and it takes a lot of smarts and new tech to overcome that. But in some ways defence is risky too. For example, a near-perfect anti-ICBM shield would break MAD, making nuclear war more attractive to whoever has it.
>For those reasons I don’t think that treaties are a long-term viable means to prevent AGI. And driving those projects into military black-ops projects doesn’t sound like it’s likely to up the odds of creating aligned AGI.
Eh, don’t know if it’d make odds worse either. At least I’d expect militaries to care about not blowing themselves up. And having to run operations in secret would gum the process up a bit.
>But I’m not sure how the public will react if it becomes common belief that AGI will either kill us, or solve all of our practical problems. They could push for development just as easily as push for a moratorium on AGI development.
True, but I think that if they read the average discourse we see here on AGI, lots of people would just think that the AGI killing us sounds bad, but that the alternative as described sounds shady too. Based on precedent, lots of people are suspicious of promises of utopia.
All good points. In particular, I hadn’t thought about the upsides of AGI as a covert military project. There are some large downsides, but my impression is that the military tends to take a longer-term view than politicians or business people.
The public reaction is really difficult to predict or influence. But it’s likely to become important. This has prompted me to write a post on that topic. Thanks for a great post and discussion!