As far as I know, most data centers are not currently hidden underground in bomb-proof bunkers. If that were known to be the case, I'd have different views of the probable events of the next few years.
Most data centers are out in the open. They weren't built with the knowledge that they would soon become the equivalent of nuclear ICBM silos. The current state of the world is heavily offense-favoring, so far as I can tell.
Do you disagree? Do you think ASI will be fully developed, and used effectively enough to achieve complete worldwide hegemony for whoever controls it, without any major international conflict?
Well, as I mentioned in the other post, I'll open up a larger point here: anything physics permits can theoretically happen in the future, right? For complex situations like this, we have no scientifically validated models of the kind particle physics gives us. All we can do is look at what happened historically in similar scenarios, and our prior should be that this round's outcome is drawn from the same probability distribution. Agree/disagree?
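To make that concrete, here is a minimal sketch of what "take the historical outcome distribution as your prior" looks like. The outcome labels and counts below are hypothetical placeholders I made up for illustration, not real historical data; the point is just the Laplace-smoothed base rate.

```python
# Minimal sketch of base-rate reasoning over historical cases.
# The labels and counts are hypothetical, not real data.
from collections import Counter

# Hypothetical outcomes of past attempts to halt a rival's
# strategic weapons program by force.
historical_outcomes = [
    "program_continued", "program_continued", "program_delayed",
    "program_continued", "program_delayed",
]

def base_rate_prior(outcomes, smoothing=1.0):
    """Laplace-smoothed categorical prior:
    P(outcome) = (count + smoothing) / (N + smoothing * K),
    where K is the number of distinct outcomes seen."""
    counts = Counter(outcomes)
    n, k = len(outcomes), len(counts)
    return {o: (c + smoothing) / (n + smoothing * k)
            for o, c in counts.items()}

print(base_rate_prior(historical_outcomes))
# {'program_continued': 0.571..., 'program_delayed': 0.428...}
```

The smoothing term just keeps the prior from assigning zero probability to an outcome we haven't happened to observe yet, which matters when the historical sample is this small.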
So for nuclear programs, all of the outcomes have happened. Superpowers have built vast campuses and secret cities, and they made the plutonium and assembled the devices in aboveground facilities. Israel apparently did it underground. Iran has been successfully slowed in its nuclear ambitions for decades.
But the general trend, as I see it, is that a superpower can't be stopped by bombing, and there's another element that has recurred many times historically: bombing and military action often harden a belligerent's resolve. It hardened the UK's resolve, US resolve, Russian resolve; the list goes on.
So in the hypothetical world where party A is too worried about AI dangers to build its own while party B is building it, then unless A can outright destroy B, B would respond to an attack with a total-war effort: it would develop AI and either win or die. (Die from nukes, die from the AI, or win the planet.)
That is the outcome I predict, and we can enumerate all the wars where this has historically happened if you would like. Trend-wise, the civilian and military leaders of B, consuming their own internal propaganda, tend to commit to total war.
If B is a superpower (the USA or China, maybe the EU), it can kill all the other parties with its nukes, so choosing to kill B is choosing suicide for yourself.