My biggest problem with Leopold’s project is this: in a world where his models hold up, where superintelligence is right around the corner, a US / China race is inevitable, and the winner really matters; in that world, publishing these essays on the open internet is very dangerous. It seems just as likely to help the Chinese side as to help the US.
If China prioritizes AI (if they decide that it’s one tenth as important as Leopold suggests), I’d expect their administration to act more quickly and competently than the US. I don’t have a good reason to think Leopold’s essays will have a bigger impact in the US government than the Chinese, or vice-versa (I don’t think it matters much that it was written in English). My guess is that they’ve been read by some USG staffers, but I wouldn’t be surprised if things die out with the excitement of the upcoming election and partisan concerns. On the other hand, I wouldn’t be surprised if they’re already circulating in Beijing. If not now, then maybe in the future—now that these essays are published on the internet, there’s no way to take them back.
What’s more, it seems possible to me that framing things as a race, and calling cooperation “fanciful”, may (in a self-fulfilling-prophecy way) make a race more likely (and cooperation less likely).
Another complicating factor is that there’s just no way the US could run a secret project without China getting word of it immediately. With all the attention paid to the top US labs and research scientists, they’re not going to all just slip away to New Mexico for three years unnoticed. (I’m not sure if China could pull off such a secret project, but I wouldn’t rule it out.)
A slight silver lining: I’m not sure a world in which China “wins” the race is all that bad. I’m genuinely uncertain. Take Leopold’s objections, for example:
I genuinely do not know the intentions of the CCP and their authoritarian allies. But, as a reminder: the CCP is a regime founded on the continued worship of perhaps the greatest totalitarian mass-murderer in human history (“with estimates ranging from 40 to 80 million victims due to starvation, persecution, prison labor, and mass executions”); a regime that recently put a million Uyghurs in concentration camps and crushed a free Hong Kong; a regime that systematically practices mass surveillance for social control, both of the new-fangled (tracking phones, DNA databases, facial recognition, and so on) and the old-fangled (recruiting an army of citizens to report on their neighbors) kind; a regime that ensures all text messages pass through a censor, and that goes so far to repress dissent as to pull families into police stations when their child overseas attends a protest; a regime that has cemented Xi Jinping as dictator-for-life; a regime that touts its aims to militarily crush and “reeducate” a free neighboring nation; a regime that explicitly seeks a China-centric world order.
I agree that all of these are bad (very bad). But I think they’re all means to preserve the CCP’s control. With superintelligence, preservation of control is no longer a problem.
I believe Xi (or choose your CCP representative) would say that the ultimate goal is human flourishing, that all they do to maintain control is to preserve communism, which exists to make a better life for their citizens. If that’s the case, then if both sides are equally capable of building it, does it matter whether the instruction to maximize human flourishing comes from the US or China?
(Again, I want to reiterate that I’m genuinely uncertain here.)
I believe Xi (or choose your CCP representative) would say that the ultimate goal is human flourishing
I’m very much worried that this sort of thinking is a severe case of the Typical Mind Fallacy.
I think the main terminal values of the individuals constituting the CCP – and I do mean terminal, not instrumental – are the preservation of their personal status, power, and control, like the values of ~all dictatorships and of most politicians in general. Ideology is mostly just an aesthetic, a tool for internal and external propaganda/rhetoric, and the backdrop for internal status games.
There probably are some genuine shards of ideology in their minds. But I expect minuscule overlap between their at-face-value ideological messaging, and the future they’d choose to build if given unchecked power.
On the other hand, if viewed purely as an organization/institution, I expect that the CCP doesn’t have coherent “values” worth talking about at all. Instead, it is best modeled as a moral-maze-like inertial bureaucracy/committee which is just replaying instinctive patterns of behavior.
I expect the actual “CCP” would be something in-between: it would intermittently act as a collection of power-hungry ideology-biased individuals, and as an inertial institution. I have no idea how this mess would actually generalize “off-distribution”, as in, outside the current resource, technology, and power constraints. But I don’t expect the result to be pretty.
Mind, something similar holds for the USG too, if perhaps to a lesser extent.
I would argue that leaders like Xi would not immediately choose general human flourishing as the goal. Xi has a giant chip on his shoulder. I suspect (not with any real proof, just from general intuition) that he feels Western powers humiliated imperial China and that permanently disabling them would be the first order of business. That means immediately dissolving Western governments and placing them under CCP control. Part of human flourishing is a feeling of agency. Having a foreign power use AI to dissolve your government is probably not conducive to human flourishing; instead, it would produce utter despair and hopelessness.
Consider what the US did to Native Americans with complete technological superiority: subjugation and decimation in the name of “improvement” and “reeducation.” Their governments were eliminated. They were often forcibly relocated at gunpoint. Schools were created to beat “savage” habits out of children. Their children were seized and rehomed with Whites. Their languages were forcibly suppressed and destroyed. Many killed themselves rather than submit. That is what I’d expect to happen to the West if China gets AGI.
Unfortunately, given the rate at which things are moving, I expect the West’s slight lead to evaporate. China has already quickly copied Sora. The West is unprepared to contend with a fully operational China. The countermeasures are half-hearted and too late. I foresee a very bleak future.