One possible path to a decisive strategic advantage is to combine a rather mediocre AI with some equally mediocre but rare real-world capability.
Toy example: an AI is created that is capable of winning a nuclear war by choosing the right targets and other elements of nuclear strategy. The AI itself is not a superintelligence; it is maybe something like AlphaZero for nukes. Many companies and people are capable of creating such an AI. However, only a nuclear power with a large arsenal could actually gain any advantage from it, which means only the US, Russia, or China. Let's assume that such an AI gives a +1000 nuclear Elo rating advantage over the other nuclear superpowers. The first of the three countries to get it would then have a temporary decisive strategic advantage. This is only a toy example, since it is unlikely that the first country to obtain such a “nuclear AI decisive advantage” would take the risk of a first strike.
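As a rough illustration of what a +1000 Elo gap would mean, here is a minimal sketch using the standard Elo expected-score formula (the +1000 figure itself, and its application to nuclear strategy, are of course just the toy assumption above):

```python
# Expected score implied by an Elo rating gap, using the standard
# Elo formula: E = 1 / (1 + 10 ** (-gap / 400)).
def elo_expected_score(rating_gap: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

for gap in (100, 400, 1000):
    print(f"+{gap} Elo -> expected score {elo_expected_score(gap):.4f}")
# A +1000 gap gives an expected score of ~0.997, i.e. under the Elo
# model's assumptions the stronger side wins essentially every time.
```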
There are several other real-world capabilities that could be combined with a mediocre AI to get a decisive strategic advantage: access to very large training data, access to large surveillance capabilities like PRISM, access to large untapped computing power, to funds, to a pool of scientists, to other secret military capabilities, or to drone manufacturing capabilities.
All these capabilities are concentrated in the largest military powers and their intelligence and military services. Thus, combining a rather mediocre AI with the full capabilities of a nuclear superpower could create a temporary strategic advantage. Assuming there are around three nuclear superpowers, one of them could gain a temporary strategic advantage via AI. But each of them has internal problems that would complicate implementing such a project.
Hm… It occurs to me that the AI itself does not have to be capable of winning a nuclear war. The leaders just have to be convinced that it gives them enough of a decisive advantage to start one.
More broadly, an AI only needs to think that starting a nuclear war has a higher expected utility than not starting it.
E.g. if an AI thinks it is about to be destroyed by default, but that starting a nuclear war (which it expects to lose) will distract its enemies and maybe give it the chance to survive and continue pursuing its objectives, then the nuclear war may be the better bet. (I discuss this kind of thing in “Disjunctive Scenarios of Catastrophic AI Risk”.)
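A minimal sketch of that comparison, with entirely made-up numbers chosen only to show the structure of the argument:

```python
# Toy expected-utility comparison for the scenario above. The
# probabilities and utilities are illustrative assumptions, not
# estimates: the point is only that "start a war you expect to lose"
# can dominate "do nothing" when doing nothing means near-certain shutdown.
P_SURVIVE_IF_PASSIVE = 0.01   # assumed: AI expects to be destroyed by default
P_SURVIVE_IF_WAR = 0.10       # assumed: chaos of a losing war offers some cover
U_SURVIVE, U_DESTROYED = 1.0, 0.0

eu_passive = P_SURVIVE_IF_PASSIVE * U_SURVIVE + (1 - P_SURVIVE_IF_PASSIVE) * U_DESTROYED
eu_war = P_SURVIVE_IF_WAR * U_SURVIVE + (1 - P_SURVIVE_IF_WAR) * U_DESTROYED

print(f"EU(do nothing) = {eu_passive:.2f}, EU(start war) = {eu_war:.2f}")
# With these toy numbers the war branch has the higher expected utility,
# even though the AI expects to lose the war itself.
```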
Not more broadly, different class. I’m thinking of, like, witch doctors making warriors bulletproof. If they believe its power will protect them, then breaking MAD becomes an option.
The AI in this scenario doesn’t need to think at all. It could actually just be a magic 8 ball.
Ah, right, that’s indeed a different class. I guess I was too happy to pattern-match someone else’s thought to my great idea. :-)
A few plausible limited abilities that could provide decisive first-mover advantages:
The ability to remotely take control of any networked computer.
The ability to defeat all conventional cryptography would provide a decisive advantage in the type of conflict we’re currently seeing.
The ability to reliably predict market price movements.