What does winning look like? What do you do next? How do you “bury the body”? You get AGI and you show it off publicly, Xi blows his stack as he realizes how badly he screwed up strategically and declares a national emergency and the CCP starts racing towards its own AGI in a year, and… then what? What do you do in this 1 year period, while you still enjoy AGI supremacy? You have millions of AGIs which can do… stuff. What is this stuff? Are you going to start massive weaponized hacking to subvert CCP AI programs as much as possible short of nuclear war? Lobby the UN to ban rival AGIs and approve US carrier group air strikes on the Chinese mainland? License it to the CCP to buy them off? Just… do nothing and enjoy 10%+ GDP growth for one year before the rival CCP AGIs all start getting deployed? Do you have any idea at all? If you don’t, what is the point of ‘winning the race’?
The standard LW & rationalist thesis (which AFAICT you agree with) is that sufficiently superintelligent AI is a magic wand that allows you to achieve whatever outcome you want. So one answer would be to prevent the CCP from doing potentially nasty things to you while they have AGI supremacy. Another answer might be to turn the CCP into a nice liberal democracy friendly to the United States. Both of these are within the range of things the United States has done historically when it has had the opportunity.
The standard LW & rationalist thesis (which AFAICT you agree with) is that sufficiently superintelligent AI is a magic wand that allows you to achieve whatever outcome you want.
The standard LW & rationalist thesis is accepted by few people anywhere in the world, least of all among policy and decision-makers, and it’s hard to imagine that it will be widely and uncontroversially accepted anywhere until it is a fait accompli—and even then I expect many people will continue to fall back on arguments like “the ghost in the machine is outsourced human labor” or “you can’t trust the research outputs” or “it’s just canned lab demos” or “it’ll fail to generalize out of distribution”. Hence, what we ourselves think is not the question here; what matters is what the relevant decision-makers will believe and act on.
So one answer would be to prevent the CCP from doing potentially nasty things to you while they have AGI supremacy. Another answer might be to turn the CCP into a nice liberal democracy friendly to the United States. Both of these are within the range of things the United States has done historically when it has had the opportunity.
It is certainly a viable strategy, if one were to execute it fully rather than partially. But I don’t think people are very interested in biting these sorts of bullets without a Pearl Harbor or 9/11:
HAWK: “Here’s our Plan A, you’ll love it!
‘We should launch an unprovoked and optional AI arms race, whose best-case scenario and ‘winning’ require the USA to commit to, halfway around the world, the total conquest, liquidation, and complete reconstruction of the second-largest/most powerful nuclearized country on earth, taking over a country with 4.25x as many people as itself, which will fiercely resist this humiliation and colonization, likely involving megadeaths, and trying to turn it into a nice liberal democracy (which we have failed to do in many countries far smaller & weaker than us, eg. Haiti, Afghanistan, or Iraq), and where, if we ever fail in this task, they will then be highly motivated to do the same to us, and likely far more motivated than we were when we began, potentially creating our country’s most bitter foe ever.’”
The original comment you wrote appeared to be a response to “AI China hawks” like Leopold Aschenbrenner. Those people do accept the AI-is-extremely-powerful premise, and are arguing for an arms race based on that premise. I don’t think whether normies can feel the AGI is very relevant to their position, because one of their big goals is to make sure Xi is never in a position to run the world, and completing a Manhattan Project for AI would probably prevent that regardless (even if it kills us).
If you’re trying to argue instead that the Manhattan Project won’t happen, then I’m mostly ambivalent. But I’ll remark that that argument feels a lot shakier in 2024 than it did in 2020, when Trump’s daughter is literally retweeting Leopold’s manifesto.
The original comment you wrote appeared to be a response to “AI China hawks” like Leopold Aschenbrenner. Those people do accept the AI-is-extremely-powerful premise...when Trump’s daughter is literally retweeting Leopold’s manifesto.
But would she be retweeting it if Leopold were being up front about how the victory scenario entails something like ‘melt all GPUs and conquer and occupy China perpetually’ (or whichever of those viable strategies he actually has in mind, assuming he has one), instead of coyly referring to a ‘decisive military advantage’, which doesn’t actually make sense or provide an exit plan?