Suppose China and Russia accepted Yudkowsky’s initiative, but the USA did not. Would you support bombing an American data center?
I for one am not being hypocritical here. Analogy: Suppose it came to light that the US was working on super-bioweapons with a 100% fatality rate, long incubation period, vaccine-resistant, etc. and that they ignored the combined calls from most of the rest of the world to get them to stop. They say they are doing it safely and that it’ll only be used against terrorists (they say they’ve ‘aligned’ the virus to only kill terrorists or something like that, but many prominent bio experts say their techniques are far from adequate to ensure this and some say they are being pretty delusional to think their techniques even had a chance of achieving this). Wouldn’t you agree that other countries would be well within their rights to attack the relevant bioweapon facilities, after diplomacy failed?
I’m not an American, so my consent doesn’t mean much :)
Can you elaborate? I’m not sure what you are saying.
I am not an American (so excuse my bad English!), so my opinion about the admissibility of an attack on US data centers is not so important. This is not my country.
But reading about the bombing of Russian data centers as an example was unpleasant. It sounds like Western bias to me. And not only to me.
‘What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question?’
If the text is aimed at readers beyond the First World countries, then perhaps the authors should add such a clarification, as you did! Then it would not look like political hypocrisy. Or not write about air strikes at all, because people get distracted discussing this.
I’m Russian, and I think that when I translate this, I will change “Russian” to “[other country’s]”. I will feel safer that way.
BTW, done.
Thank you for pointing this perspective out. Although Eliezer is from the West, I assure you he cares nothing for that sort of politics. The whole point is that the ban would have to be universally supported, with a tight alliance between the US, China, Russia, and ideally every other country in the world. No one wants to do any airstrikes and, you’re right, they are distracting from the real conversation.
Thanks. I agree it was a mistake for Yudkowsky to mention that bit, for the reason you mention. Alternatively he should have clarified that he wasn’t being a hypocrite and that he’d say the same thing if it was US datacenters going rogue and threatening the world.
I think your opinion matters morally and epistemically regardless of your nationality. I agree that your opinion is less likely to influence the US government if you aren’t living in the US. Sorry about that.
Thanks for your answer, this is important to me.
Umm, arguably the USA did exactly this when it developed devices that exploit fusion and then miniaturized them and loaded them into bombers, silos, and submarines.
They never made enough nukes to kill everyone on the planet, but that bioweapon probably wouldn’t either. A bioweapon is more counterable: some groups would survive so long as they isolated for long enough.
So… are you saying that if the nations of the world had gotten together to agree to ban nukes in 1950 or so, and the ban seemed to be generally working except that the USA said no and continued to develop nukes, the other nations of the world would have been justified in attacking said nuclear facilities?
Justified? Yes. Would the USA have caved in response? Of course not; it has nukes and they don’t. (Assuming it first gets everything in place for rapid exploitation of the nukes, it can use them danger-close to vaporize invasions, then bomb every attacking country’s most strategic assets.)
AGI has similar military benefits. Better attack fast or the country with it will rapidly become more powerful and you will be helpless to threaten anything in return, having not invested in AGI infrastructure.
So in this scenario each party has to have massive training facilities, smaller secret test runs, and warehouses full of robots so they can rapidly act if they think the other party is defecting. So everyone is a slight pressure on a button away from developing and using AGI.
If diplomacy failed, but yes, sure. I’ve previously wished out loud for China to sabotage US AI projects in retaliation for chip export controls, in the hopes that if all the countries sabotage all the other countries’ AI projects, maybe Earth as a whole can “uncoordinate” to not build AI even if Earth can’t coordinate.
Are you aware that AI safety is not considered a real issue by the Chinese intelligentsia? The limits of AI safety awareness here are surface-level discussions of Western AI Safety ideas. Not a single Chinese researcher, as far as I can recall, has actually said anything like “AI will kill us all by default if it is not aligned”.
Given the chip ban, any attempts at an AI control treaty will be viewed as an attempt to prevent China from overtaking the US in terms of AI hegemony. The only conditions to an AI control treaty that Beijing will accept will also allow it to reach transformative AGI first. Which it then will, because we don’t think AI safety is a real concern, the same way you don’t think the Christian rapture is a real concern.
The CCP does not think like the West. Nothing says it has to take Western concerns seriously. WE DON’T BELIEVE IN AI RUIN.
Nobody in the US cared either, three years earlier. That superintelligence will kill everyone on Earth is a truth, and one which has gotten easier and easier to figure out over the years. I have not entirely written off the chance that, especially as the evidence gets more obvious, people on Earth will figure out this true fact and maybe even do something about it and survive. I likewise am not assuming that China is incapable of ever figuring out this thing that is true. If your opinion of Chinese intelligence is lower than mine, you are welcome to say, “Even if this is true and the West figures out that it is true, the CCP could never come to understand it”. That could even be true, for all I know, but I do not have present cause to believe it. I definitely don’t believe it about everyone in China; if it were true and a lot of people in the West figured it out, I’d expect a lot of individual people in China to see it too.
American here. Yes, I would support it—even if it caused a lot of deaths because the data center is in a populated area. American AI researchers are a much bigger threat to what I care about (i.e., “the human project”) than Russia is.
Not sure if I would put it that strongly, but I think I would not support retaliation for the bombing if it legitimately (after diplomacy) came to that. The bombing country would have to claim to be acting in self-defense, try to minimize collateral damage, and not be doing large training runs themselves.
https://imgflip.com/i/7h9d2q
All AI safety was about bombing Russia?
It always was.