Your proposed solution of “coordinate with our sworn enemies not to develop ASI and continue to restrict the development of any advanced technology in medicine” has the predicted outcome that we die, because we remain helpless to do anything about the things killing us. Either our sworn enemies defect on the agreement and develop ASI, or we all individually die of aging. Lose-lose.
First, China are not “our sworn enemies” and this mindset already takes things to the extreme. China has diverging interests which might compete with ours but it’s not literally ideologically hell-bent on destroying everyone else on the planet. This kind of extreme mindset is already toxic; if you posit that coordination is impossible, of course it is.
Second, if your only alternative to death is living in a literal Hell, then I think many would reasonably pick death. It also must be noted that here:
“That deadline might be 1 billion years until the sun expands or it might be 20 years until we face the first rampant ASI.”

The natural deadline is VERY distant. Plenty of time to do something about it. The close deadline (and many other such deadlines) is of our own making, ironically in the rush to avoid some other kind of hypothetical danger that may be much further away. If we want to avoid being destroyed, learning how not to destroy ourselves would be an important first step.
First, China are not “our sworn enemies” and this mindset already takes things to the extreme.
I was referring to China, Russia, and to a lesser extent about 10 other countries that probably won’t have the budget to build ASI anytime soon. Both China and Russia hold the rest of the world at gunpoint with nuclear arsenals, as do the USA and some European nations. All are essentially one bad decision away from causing catastrophic damage.
Past attempts to come to some kind of deal not to build doomsday weapons to hold each other hostage all failed; why would they succeed this time? What could happen as a result of all this campaigning for government regulation is that, like enriched nuclear material, ASIs above a certain level of capability become the exclusive domain of governments, which will be unaccountable and will choose safety measures through their own opaque processes. In this scenario, instead of many tech companies competing, it’s large governments, who can marshal far more resources than any private company can get from investors. I am not sure this delays ASI at all.
Notably, they also have not used nuclear weaponry in combat recently, and overall nuclear stockpiles have decreased by about 80 percent. Part of playing the grim game is not giving the other player reasons to go grim by defecting. The same goes for ASI: they can suppress each other, but if one defects, the consequence is that the defector cannot benefit.
The mutual result is actually quite stable under government-only control, since their incentives against self-destruction are high.
Basically, only North Korea-esque nations in this scenario have a strong incentive to defect, and they would be suppressed by all extant powers. Since they would essentially be seen as terrorist speciciders, it’s hard to see why any actions against them wouldn’t be justified.
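For concreteness, here is a minimal toy sketch of the grim-trigger dynamic being invoked: cooperate until the other side defects once, then punish forever. The payoff numbers are assumptions chosen only for illustration, not a model of real geopolitics.

```python
# Toy iterated game illustrating a grim-trigger strategy: cooperate until the
# other side defects once, then punish forever. Payoff numbers are assumed
# for illustration only, not a model of real geopolitics.

COOPERATE, DEFECT = "C", "D"

# (my_move, their_move) -> my payoff for that round
PAYOFFS = {
    ("C", "C"): 3,  # both restrain ASI development
    ("D", "C"): 5,  # I defect and race ahead unopposed
    ("C", "D"): 0,  # I restrain while the other side races
    ("D", "D"): 1,  # mutual racing / mutual punishment
}

def grim_trigger(opponent_history):
    """Cooperate until the opponent has ever defected, then always defect."""
    return DEFECT if DEFECT in opponent_history else COOPERATE

def always_defect(_opponent_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=20):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(grim_trigger, grim_trigger))   # (60, 60): mutual restraint is stable
    print(play(grim_trigger, always_defect))  # (19, 24): the defector forfeits most of the benefit
```

The point of the toy example is only that the defector’s long-run payoff collapses once the other side goes grim, which is the stability claim being made above.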
I think the crux of our disagreement is that you are using Eliezer’s model, where the first ASI you build is deceptive by default, always motivated in a way beneficial to itself, and so ridiculously intelligent that it can defeat what should be hard limits.
While I am using a model where you can easily, with known software techniques, build ASIs that are useful and take up the “free energy” a hostile ASI would need to win.
If, when we build the first ASI-class systems, it turns out Eliezer’s model is accurate, I will agree that grim games are rational and something we can do to delay the inevitable. (It might even be stable for centuries, although eventually the game will fail and result in human extinction, ASI release, or both.)
I do feel we need hard evidence to determine which world we are in. Do you agree with that, or do you think we should just assume ASIs are going to fit the first model and threaten nuclear war to stop anyone from building them?
Hard evidence would be building many ASI and testing them in secure facilities.
ASI is unnecessary when we have other options, and grim game dynamics apply to avoid extinction or dystopia. I find even most such descriptions of tool-level AI disgusting (as, evidently, do many others).
Inevitability only applies if we have perfect information about the future, which we do not.
If it were up to me alone, I think we could give it at least a thousand years. Perhaps we could first raise the IQ of humanity by 1 SD via simple embryo selection before we go about driving ourselves extinct.
I actually do not think that we’re that close to cracking AGI; however, the intensity of the reaction is, imo, an excellent litmus test of how disgusting it is to most people.
I strongly suspect the grim game dynamics have already begun, too, which is one reason I take comfort about the future.
From my perspective, I see the inverse: Singularity Criticality has already begun. The Singularity is the world of human-level AGI and self-replicating robots, one where very large increases in resources are possible.
Singularity Criticality is the pre-Singularity phase in which, as tools that produce more economic value than they cost come to exist, they accelerate the last steps towards AGI and self-replicating robots. Further developments follow from there.
I do not think anything other than essentially immediate nuclear war can stop a Singularity.
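A toy sketch of the criticality framing above: treat each generation of tools as returning r units of value per unit of cost, with the surplus reinvested into the next generation. The numbers are assumptions chosen only to contrast the sub-critical and super-critical regimes.

```python
# Toy feedback model of the criticality framing above: each generation of tools
# returns r units of value per unit of cost, and the surplus funds the next
# generation. All numbers are assumptions chosen only to contrast the
# sub-critical and super-critical regimes.

def capability_over_time(r, initial=1.0, steps=10):
    """r < 1: investment decays; r > 1: the surplus compounds."""
    c = initial
    trajectory = [round(c, 3)]
    for _ in range(steps):
        c *= r  # next generation funded by this generation's return
        trajectory.append(round(c, 3))
    return trajectory

if __name__ == "__main__":
    print(capability_over_time(0.8))  # tools cost more than they return: fizzles out
    print(capability_over_time(1.3))  # tools return more than they cost: compounds
```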
Observationally, there is enormous economic pressure towards the Singularity, and I see no evidence whatsoever of policymakers even considering grim triggers. Can you please cite a government official stating a willingness to commit to total war if another party violates rules on ASI production? Can you cite any political parties or think tanks that are not directly associated with Eliezer Yudkowsky? I am willing to update on evidence.
I understand you feel disgust, but I cannot distinguish the disgust you feel from that of the Luddites observing the rise of factory work. (The Luddites were correct in the short term; the new factory jobs were a major downgrade.) Worlds change, and the world of stasis you propose, with very slow advances through embryo selection, I think is unlikely.
The UK has already mentioned that perhaps there should be a ban on models above a certain level. Though it’s not official, I have it on pretty good record that Chinese party members have already discussed worldwide war as potentially necessary (Eric Hoel also mentioned it, separately). Existential risk has been mentioned, and of course national risk is already a concern, so even for “mundane” reasons it’s a matter of priority/concern, and grim triggers are a natural consequence.
Elon had a personal discussion with China recently as well, and given his well known perspective on the dangers of AI, I expect that this point of view has only been reinforced.
And this is with barely reasoning chatbots!
As for the Luddites, I don’t see why inflicting dystopia upon humanity because it fits some sort of cute agenda serves any good purpose. But notably, the Luddites did not have the support of the government, and the government was not threatened by textile mills. Obviously this isn’t the case with nuclear, AI, or bio. We’ve seen slowdowns on all of those.
“Worlds change” has no meaning on its own: human culture and involvement shape how the world changes.
Ok. Thank you for the updates. It seems like the near-term outcome depends on a race condition where, as you said, both government and private industry are acting, and government has incentives both to preserve the status quo and to become immensely richer and more powerful.
The economy of course says otherwise. Investors are gambling that Nvidia is going to expand AI accelerator production by probably 2 orders of magnitude or more (to match the P/E ratio they have run the stock to), which is consistent with a world building many AGIs, some ASIs, and deploying many production systems. So you posit that governments worldwide are going to act in a coordinated manner to suppress the technology despite its wealthy supporters.
I won’t claim to know the actual outcome but may we live in interesting times.
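To make the P/E point concrete, a back-of-envelope sketch; the multiples are hypothetical placeholders, not current market data, and whether the implied growth reaches “two orders of magnitude” depends entirely on the inputs.

```python
# Back-of-envelope sketch of the P/E argument above. The multiples are
# hypothetical placeholders, not current market data; the point is only the
# mechanics: a rich multiple implies earnings (and so accelerator volume)
# must grow many-fold before the valuation looks "mature".

assumed_pe_today = 100   # hypothetical rich multiple
assumed_mature_pe = 20   # hypothetical multiple for a mature chip vendor

required_earnings_growth = assumed_pe_today / assumed_mature_pe
print(f"Earnings would need to grow roughly {required_earnings_growth:.0f}x "
      "for today's price to correspond to a mature multiple.")
```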
I think even the wealthy supporters of it are more complex: I was surprised that Palantir’s Peter Thiel came out discussing how AI “must not be allowed to surpass the human spirit” even as he clearly is looking to use AI in military operations. This all suggests significant controls incoming, even from those looking to benefit from it.
Googling for “must not be allowed to surpass the human spirit” and Palantir finds no hits.
He discussed it here:
https://youtu.be/Ufm85wHJk5A?list=PLQk-vCAGvjtcMI77ChZ-SPP—cx6BWBWm
I agree with controls. I have an issue with wasted time on bureaucratic review and think it could burn the lead the Western countries have.
Basically, “do x, y, z to prove your model is good” and “design it according to this known-good framework” are OK with me.
“We have closed reviews for this year” is not. “We have issued too many AI research licenses this year” is not. “We have denied your application because we made mistakes in our review and will not update on evidence” is not.
All of these arise from a power imbalance. The entity requesting authorization is liable for any errors, but the government makes itself immune from accountability. (For example, the government should be on the hook for the lost revenue, measured by the future product’s actual revenue, for each day the review is delayed. The government should be required to buy companies at fair market value if it denies them an AI research license. Etc.)
Lead is irrelevant to human extinction, obviously. The first to die is still dead.
In a democratic world, those affected have a say in how AI is inflicted upon them and how much they want to die or suffer.
The government represents the people.
You are using the poisoned banana theory and do not believe we can easily build controllable ASI systems by restricting their inputs to in-distribution examples and resetting state often, correct?
I just wanted to establish your cruxes. If you could easily build safe ASI, would this change your opinion on the correct policy?
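For concreteness, a minimal sketch of the control pattern referred to in the question (gating inputs to in-distribution examples and resetting state often); the class, the distribution check, and the thresholds are hypothetical stand-ins, not any real system’s API.

```python
# Minimal sketch of the control pattern mentioned in the question: gate inputs
# to in-distribution examples and reset state often. The class, the
# distribution check, and the thresholds are hypothetical stand-ins, not any
# real system's API.

from dataclasses import dataclass, field

@dataclass
class GatedModel:
    reset_every: int = 100  # assumed reset interval (number of queries)
    _queries_since_reset: int = field(default=0, init=False)
    _state: dict = field(default_factory=dict, init=False)

    def _in_distribution(self, prompt: str) -> bool:
        # Stand-in check: a real deployment would score the input against the
        # evaluated training/test distribution; here we only bound its length.
        return len(prompt) < 2000

    def _reset(self) -> None:
        # Drop any accumulated context/state between batches of queries.
        self._state.clear()
        self._queries_since_reset = 0

    def ask(self, prompt: str) -> str:
        if not self._in_distribution(prompt):
            return "REFUSED: input outside the evaluated distribution"
        if self._queries_since_reset >= self.reset_every:
            self._reset()
        self._queries_since_reset += 1
        # Stand-in for the underlying model call.
        return f"(stub answer to: {prompt[:40]}...)"

if __name__ == "__main__":
    model = GatedModel(reset_every=2)
    print(model.ask("a short, in-distribution question"))
```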
No, I wouldn’t want it even if it were possible, since by nature it is a replacement for humanity. I’d only accept Elon’s vision of AI bolted onto humans, so that it effectively is part of us and thus can be said to be an evolution rather than a replacement.
My main crux is that humanity has to be largely biological due to holobiont theory. There’s a lot of flexibility around that but anything that threatens that is a nonstarter.
Ok, that’s reasonable. In worlds where ASI turns out to be easily controllable/taskable, do you foresee governments setting up “grim triggers” like the ones you advocate, or do you think such policies would not be enacted by the superpowers with nuclear weapons?
Obviously, without grim triggers, you end up with the scenario you despise: immortal humans and their ASI tools controlling essentially all power and wealth.
This is, I think, something of a flaw in your viewpoint. Over the arrow of time, AI/AGI/ASI adopters and contributors are going to hold almost all of the effective votes. Your stated preferences mean that, over time, your faction will lose power and relevance.
For an example of this, see autonomous weapons bans. For a more general example, consider the EMH (efficient market hypothesis).
Please note I am trying to be neutral here. Your preferences are perfectly respectable and understandable; it’s just that some preferences may have more real-world utility than others.
This frames things as an inevitability, which is almost certainly wrong; more specifically, opposition to a technology leads to alternatives being developed. E.g., widespread nuclear controls led to alternative energy sources being pursued.
Being controllable is unlikely to matter even if it is tractable for human controllers: ASI still represents power, which means it will be treated as a threat by established actors, and its terroristic implications mean there is moral valence to policing it.
In a world with controls, grim triggers or otherwise, AI would have to develop along different lines, likely in ways that are more human-compatible. In a world of intense grim triggers, it may be too costly to continue development beyond a certain point. “Don’t build ASI or we nuke” is completely reasonable if both “build ASI” and “nuking” are negative but the former is more negative.
Autonomous weapons are actually an excellent example of delay: despite excellent evidence of the superiority of drones, pilots have worked to mothball such programs for at least 40 years, and so have governments, in spite of the wartime benefits.
The argument seems similar to the flaw in the “billion year” argument: we may die eventually, but life only persists by resisting death long enough to replicate.
As far as real-world utility goes, notwithstanding some recent successes, going down without fighting for myself and my children would be quite silly.
I think the error here is you may be comparing technologies on different benefit scales than I am.
Nuclear power can be cheaper than paying for fossil fuel to burn in a generator, if the nuclear reactor is cheaply built and has a small operating staff. Your benefit is a small decrease in price per kWh.
As we both know, cheaply built and lightly staffed nuclear plants are a hazard, and governments have made them illegal. Safe plants, which are expensively built, with large staffs and time spent reviewing the plans for approval and redoing faulty work during construction, are more expensive than fossil fuels and now renewables, and are generally not worth building.
Until extremely recently, AI-controlled aircraft did not exist. The general public has for decades misunderstood what “autopilot” systems are capable of. Until a few months ago, none of those systems could actually pilot their aircraft; they solely acted as simple controllers heading towards waypoints, etc. (Some can operate the main flight controls during a landing, but many of the steps must be performed by the pilot.)
The benefit of an AI-controlled aircraft is that you don’t have to pay a pilot.
Drones were not superior until extremely recently. You may be misinformed about the capabilities of systems like the Predator 1 and 2 drones, which were not capable of air combat maneuvering and had no software algorithms available in that era capable of it. Also, combat aircraft have been firing autonomous missiles at each other since the Korean War.
Note that both benefits are linear. You get, say, n percent cheaper electricity, where n is less than 50, or n percent cheaper aircraft operation, where n is less than 20.
The benefits of AGI are exponential. Eventually the benefits scale to millions, then billions, then trillions of times the physical resources, etc., that you started with.
It’s extremely divergent. Once a faction gets even a doubling or two it’s over; nukes won’t stop them.
Assumption: by a doubling I mean, say, a nation with a GDP of 10 trillion gets AGI and now has a GDP of 20 or 40 trillion. Their territory is covered with billions of new AGI-based robotic factories and clinics and so on. Your nuclear bombardment does not destroy enough copies of the equipment to prevent them from recovering. (A toy comparison of the two scales is sketched below.)
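A toy comparison of the two benefit scales, using the discussion’s illustrative figures: a one-off percentage saving versus repeated doublings. The GDP figure and rates are only the numbers used above, not forecasts.

```python
# Toy arithmetic contrasting the two benefit scales above: a one-off
# percentage saving versus repeated doublings. The GDP figure and rates are
# the discussion's illustrative numbers, not forecasts.

baseline_gdp = 10e12  # $10 trillion, as in the example above

# "Linear" benefit: a one-time efficiency gain of, say, 20%.
linear_gain = baseline_gdp * 0.20

# "Exponential" benefit: n successive doublings of output.
def after_doublings(gdp, n):
    return gdp * 2 ** n

print(f"One-off 20% saving:  {linear_gain / 1e12:.1f}T")
print(f"After 1 doubling:    {after_doublings(baseline_gdp, 1) / 1e12:.1f}T")
print(f"After 2 doublings:   {after_doublings(baseline_gdp, 2) / 1e12:.1f}T")
```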
I’ll look for the article later, but basically the Air Force has found pilotless aircraft useful for around thirty years, yet organized rejection has led to most such programs meeting an early death.
The rest is a lot of “AGI is magic,” without considering the actual costs of computation or noncomputable situations. Nukes would just scale up: it costs much less to destroy than to build, and the significant thing about modern economies is precisely that they require networks, which do not take shocks well. Everything else is basically “ASI is magic.”
I would bet on the bomb.
Two points:
First, we would need some more context on what you are referring to. For loitering over an undefended target and dropping bombs, yes, drones are superior, and the US Air Force has allowed the US Army to operate those drones instead. I do not think the US Air Force has believed, over the last 30 years, that operating high-end aircraft such as stealth and supersonic fighter-bombers was within the capability of drone software, with things shifting only recently. Remember, the first modern deep learning experiments were tried in 2012; prior to that, AI was mostly a curiosity.
Second, if “the bomb” can wipe out a country with automated factories and missile defense systems, why fear AGI/ASI? I see a bit of cognitive dissonance in your latest point, similar to Gary Marcus. Gary Marcus has consistently argued that current LLMs are just a trick, that real AGI is very far away, and that near-term systems are no threat, yet he also argues for AI pauses. This feels like an incoherent view that you are also expressing. Either AGI/ASI is, as you put it, in fact magic and you need to pound the red button early and often, or you can delay committing national suicide until later. I look forward to a clarification of your beliefs.
I don’t think it is magic, but it is still sufficiently disgusting to treat as an equivalent threat now. Red button now.
It’s not a good idea to treat a disease right before it kills you: prevention is the way to go.
So no, I don’t think it is magic. But I do think that, just as the world agreed against human cloning long before there was a human clone, now is the time to act.
So, gathering up your beliefs: you believe ASI/AGI to be a threat, but not so dangerous a threat that you need to use nuclear weapons until an enemy nation is extremely far along with it, which, according to your beliefs, will take many years since it’s not that good.
But you find the very idea of non-human intelligence in use by humans, or possibly serving itself, so disgusting that you want nuclear weapons used the instant anyone steps out of compliance with international rules you wish to impose. (Note this is historically unprecedented; arms control treaties have been voluntary and did not carry immediate thermonuclear war as the penalty for violating them.)
And since your beliefs are emotionally based on “disgust”, I assume there is no updating based on actual measurements? That is, if ASI turns out to be safer than you currently think, you still want immediate nukes, and vice versa?
What percentage of the population of world superpower decision makers do you feel share your belief? Just a rough guess is fine.
The point is that sanctions should be applied as necessary to discourage AGI; however, approximate grim triggers should apply as needed to prevent dystopia.
As the other commenters have mentioned, my reaction is not unusual, which is why concerns about doom have been widespread.
So the answer is: enough.