I agree with many points here and have been excited about AE Studio’s outreach. Quick thoughts on China/international AI governance:
I think some international AI governance proposals have some sort of “kum ba yah, we’ll all just get along” flavor/tone to them, or some sort of “we should do this because it’s best for the world as a whole” vibe. This isn’t even Dem-coded so much as it is naive-coded, especially in DC circles.
US foreign policy is dominated primarily by concerns about US interests. Other considerations can matter, but they are not the dominant driving force. My impression is that this is true within both parties (with a few exceptions).
I think folks interested in international AI governance should study international security agreements and try to get a better understanding of relevant historical case studies. Lots of stuff to absorb from the Cold War, the Iran Nuclear Deal, US-China relations over the last several decades, etc. (I’ve been doing this & have found it quite helpful.)
Strong Republican leaders can still engage in bilateral/multilateral agreements that serve US interests. Recall that Reagan negotiated arms control agreements with the Soviet Union, and the (first) Trump Administration facilitated the Abraham Accords. Being “tough on China” doesn’t mean “there are literally no circumstances in which I would be willing to sign a deal with China.” (But there likely does have to be a clear case that the deal serves US interests, has appropriate verification methods, etc.)
I think some international AI governance proposals have some sort of “kum ba yah, we’ll all just get along” flavor/tone to them, or some sort of “we should do this because it’s best for the world as a whole” vibe. This isn’t even Dem-coded so much as it is naive-coded, especially in DC circles.
This inspired me to write a silly dialogue.
Simplicio enters. An engine rumbles like the thunder of the gods, as Sophistico focuses on ensuring his MAGMA-O1 racecar will go as fast as possible.
Simplicio: “You shouldn’t play Chicken.”
Sophistico: “Why not?”
Simplicio: “Because you’re both worse off?”
Sophistico, chortling, pats Simplicio’s shoulder.
Sophistico: “Oh dear, sweet, naive Simplicio! Don’t you know that no one cares about what’s ‘better for everyone?’ It’s every man out for himself! Really, if you were in charge, Simplicio, you’d be drowned like a bag of mewling kittens.”
Simplicio: “Are you serious? You’re really telling me that you’d prefer to play a game where you and Galactico hurtle towards each other on tonnes of iron, desperately hoping the other will turn first?”
Sophistico: “Oh Simplicio, don’t you understand? If it were up to me, I wouldn’t be playing this game. But if I back out or turn first, Galactico gets to call me a Chicken, and say his brain is much larger than mine. Think of the harm that would do to the United Sophist Association!”
Simplicio: “Or you could die when you both ram your cars into each other! Think of the harm that would do to you! Think of how Galactico is in the same position as you!”
Sophistico shakes his head sadly.
Sophistico: “Ah, I see! You must believe steering is a very hard problem. But don’t you understand that this is simply a matter of engineering? No matter how close Galactico and I get to the brink, we’ll have time to turn before we crash! Sure, there’s some minute danger that we might make a mistake in the razor-thin slice between utter safety and certain doom. But the probability of harm is small enough that it doesn’t change the calculus.”
Simplicio: “You’re not getting it. Your race against each other will shift the dynamics of when you’ll turn. Each moment, you’ll be incentivized to go just a little further, until in enough worlds that razor-thin slice ain’t so thin any more. And your steering won’t save you from that. It can’t.”
Sophistico: “What an argument! There’s no way our steering won’t be good enough. Look, I can turn away from Galactico’s car right now, can’t I? And I hardly think we’d push things till so late. We’d be able to turn in time. And moreover, we’ve never crashed before, so why should this time be any different?”
Simplicio: “You’ve doubled the horsepower of your car and literally tied a rock to the pedal! You’re not going to be able to stop in time!”
Sophistico: “Well, of course I have to go faster than last time! USA must be first, you know?”
Simplicio: “OK, you know what? Fine. I’ll go talk to Galactico. I’m sure he’ll agree not to call you chicken.”
Sophistico: “That’s the most ridiculous thing I’ve ever heard. Galactico’s ruthless and will do anything to beat me.”
Simplicio leaves as Acceleratio arrives with a barrel of jet fuel for the scramjet engine he hooked up to Sophistico’s O-1.
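The payoff structure Simplicio is gesturing at is the classic game of Chicken. A minimal sketch (the payoff numbers are made up purely for illustration) confirms that the only pure-strategy equilibria are the ones where exactly one side backs down, which is why neither Sophistico nor Galactico wants to be the one who turns:

```python
# Chicken: each player Swerves ("S") or goes Straight ("D").
# Illustrative payoffs as (row player, column player); the numbers are assumptions.
payoffs = {
    ("S", "S"): (0, 0),      # both swerve: mild embarrassment for both
    ("S", "D"): (-1, 1),     # the swerver is "the chicken", the other wins
    ("D", "S"): (1, -1),
    ("D", "D"): (-10, -10),  # crash: far worse than any loss of face
}

def is_nash(a, b):
    """(a, b) is a pure Nash equilibrium if neither player gains by deviating."""
    moves = ("S", "D")
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in moves)
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in moves)
    return row_ok and col_ok

equilibria = [(a, b) for a in ("S", "D") for b in ("S", "D") if is_nash(a, b)]
print(equilibria)  # [('S', 'D'), ('D', 'S')]
```

Note that mutual swerving is not an equilibrium here: if you expect the other to swerve, you do strictly better by going straight, which is the whole engine of the brinkmanship.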
Pretty much. It’s not “naive” if it’s literally the only option that actually does not harm everyone involved, unless of course we want to call every world leader and self-appointed foreign policy expert a blithering idiot with tunnel vision (I make no such claim a priori; ball’s in their court).
It’s important to not oversimplify things. It’s also important to not overcomplicate them. Domain experts tend to be resistant to the first kind of mental disease, but tragically prone to the second. Sometimes it really is Just That Simple, and everything else is commentary and superfluous detail.
If I squint, I can see where they’re coming from. People often say that wars are foolish, and both sides would be better off if they didn’t fight. And this is standardly called “naive” by those engaging in realpolitik. Sadly, for any particular war, there’s a significant chance they’re right. Even aside from human stupidity, game theory is not so kind as to allow for peace unending. But the China-America AI race is not like that. The Chinese don’t want to race. They’ve shown no interest in being part of a race. It’s just American hawks on a loud, Quixotic quest masking the silence.
If I were to continue the story, it’d show Simplicio asking Galactico not to play Chicken and Galactico replying “Race? What race?” Then Sophistico crashes into Galactico and Simplicio. Everyone dies. The End.
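On the “game theory is not so kind” point: even in the textbook mixed equilibrium of Chicken, a crash happens with positive probability. A back-of-the-envelope sketch, with illustrative payoffs I am assuming (swerving against a straight-goer costs 1, winning pays 1, crashing costs 10, mutual swerving pays 0):

```python
from fractions import Fraction

# Symmetric mixed equilibrium: each player goes Straight with probability p,
# chosen so the opponent is indifferent between Swerve and Straight.
# Expected payoff of Swerve:   -1*p + 0*(1-p)  = -p
# Expected payoff of Straight: -10*p + 1*(1-p) = 1 - 11p
# Indifference: -p = 1 - 11p  =>  p = 1/10
p = Fraction(1, 10)
crash_probability = p * p  # both go Straight at once
print(crash_probability)  # 1/100
```

So even perfectly rational play leaves a 1-in-100 chance of mutual ruin under these numbers; raising the stakes of backing down pushes p, and the crash probability, higher.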
People often say that wars are foolish, and both sides would be better off if they didn’t fight. And this is standardly called “naive” by those engaging in realpolitik. Sadly, for any particular war, there’s a significant chance they’re right. Even aside from human stupidity, game theory is not so kind as to allow for peace unending.
I’m obviously not saying that ALL conflict ever is avoidable or irrational, but a lot of them are:
caused by a miscommunication/misunderstanding/delusional understanding of reality;
rooted in a genuine competition between conflicting interests, but those interests only pertain to a handful of leaders, and most of the people actually doing the fighting really have no genuine stake in it, just false information and/or a giant coordination problem that makes it hard to tell those leaders to fuck off;
rooted in a genuine competition between the conflicting interests of the actual people doing the fighting, but the gains are still not large enough to justify the costs of the war, which have been wildly underestimated.
And I’d say that just about makes up a good 90% of all conflicts. There’s a thing where people embedded in specialised domains start seeing the trees (“here is the complex clockwork of cause-and-effect that made this thing happen”) and missing the forest (“if we weren’t dumb and irrational as fuck, none of this would have happened in the first place”). The main point of studying past conflicts should be to distil here and there a bit of wisdom about how, in fact, a lot of that stuff is entirely avoidable if people can just stop being absolute idiots now and then.
My impression is that (without even delving into any meta-level IR theory debates) Democrats are more hawkish on Russia while Republicans are more hawkish on China. So while obviously neither party is kum-ba-yah and both ultimately represent US interests, it still makes sense to expect each party to be less receptive to ending a potential arms race against the country it considers an existential threat to US interests if left unchecked. That is, the party more hawkish on a primarily military superpower would be worse on nuclear x-risk, and the party more hawkish on a primarily economic superpower would be worse on AI x-risk and environmental x-risk. (Negotiating arms control agreements with the enemy superpower right during its period of liberalization and collapse, or facilitating a deal between multiple US allies with the clear goal of serving as a counterweight to the purported enemy superpower, seems entirely irrelevant here.)