I don’t think a ‘fast FOOM’ is plausible; the existence of multiple competing AGI-equipped powers would serve to deter a ‘slow FOOM’.
Even if cryptography is not as threatening as advances in direct weapons (e.g. you could make a case for weapons nanobots), it is certainly a large source of potentially decisive military advances. Cyber attacks are faster than direct attacks and would be more difficult to defend against. Cyber attack technology (including cryptography) is harder to reverse-engineer, and its research and deployment involves no physical manufacturing, making its illicit development under a global weapons ban more difficult to detect.
I don’t think a ‘fast FOOM’ is plausible; the existence of multiple competing AGI-equipped powers would serve to deter a ‘slow FOOM’.
This leads to a variety of questions:
First, regarding the fast fooming issue:
1. How fast is “fast FOOM” in your framework?
2. How unlikely does something have to be before you would label it implausible?
3. How likely do you think it is that P=NP?
4. How likely do you think it is that BQP contains NP?
5. How plausible is it to you that a strong but not-yet-foomed AI could build practical quantum computers?
6. How likely do you consider fast fooming given P=NP or NP contained in BQP?
Note that for your answers to 1 and 6 to be consistent, the probability in 1 should be at least the probability you gave in 6 times the probability in 3 (ETA: fixed), since 3 and 4-5 are only one pair of pathways by which an AI could plausibly go foom.
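(As a purely illustrative check with made-up numbers: if you answered 3 with P(P=NP) = 0.05 and 6 with 0.5, then your answer to 1 could not consistently be below 0.05 * 0.5 = 0.025.)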
the existence of multiple competing AGI-equipped powers would serve to deter a ‘slow FOOM’.
This is not obvious. Moreover, what is to prevent the AGIs from working together in a way that makes humans irrelevant? If there’s a paperclip maximizer and a stamp maximizer, they can agree to cooperate (after all, there’s very little overlap between the elements in stamps and the elements in metal paperclips), and then humans are just as badly off as if only one of them were around. Multiple strong AIs that don’t share human values mean we have even more intelligent competitors for resources in our approximate light cone. Increasing the number of competing AIs might make it less likely for humans to survive in any way that we’d recognize as something we want.
Even if cryptography is not as threatening as advances in direct weapons (e.g. you could make a case for weapons nanobots), it is certainly a large source of potentially decisive military advances.
Not really. Military organizations rarely need to use cutting-edge cryptography. Most interesting cryptographic protocols are things like public-key crypto, which are useful when one has a large number of distinct economic actors who can’t be trusted and don’t have secure communication channels. Armies have centralized command structures, which allow one to do things like distribute one-time pads or agree on signals in advance, and that makes most of these issues irrelevant. The situations where armies do need cryptographic protocols are situations like World War 2, where one has many small groups that one needs to communicate with securely and doesn’t have easy physical access to. In that sort of context, modern crypto can help. But large-scale ground wars and similar situations seem like an unlikely form of warfare.
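To make the one-time-pad point concrete, here is a minimal sketch (purely illustrative; not drawn from any real military system): once a pad has been physically distributed in advance, “encryption” is a single XOR, and no key exchange over an insecure channel is needed.

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # XOR each byte of the data with the corresponding pad byte.
    # This is only secure if the pad is truly random, at least as long
    # as the message, kept secret, and never reused.
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

pad = secrets.token_bytes(32)                 # handed out ahead of time by central command
ciphertext = otp_xor(b"attack at dawn", pad)  # sent over an insecure channel
assert otp_xor(ciphertext, pad) == b"attack at dawn"  # XOR is its own inverse
```

The point of the sketch is that all of the hard work is in the prior physical distribution of the pad, which a centralized command structure can handle, rather than in any mathematically sophisticated protocol.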
Cyber attacks are faster than direct attacks and would be more difficult to defend against.
Hang on. Are we now talking about security in general? That’s a much broader set of questions than just cryptography. I don’t know whether it is in general more difficult to defend against such attacks. Most of those attacks have an easy answer: keep systems offline. Attacks through the internet can cause economic damage, but it is difficult for them to cause military damage unless high-priority systems are connected to the internet, which is just stupid.
Cyber attack technology (including cryptography) is harder to reverse-engineer
Can you expand on this claim?
making its illicit development under a global weapons ban more difficult to detect.
Has anyone ever suggested a global ban on cryptography or anything similar? Why does that seem like a scenario worth worrying about?
6. How likely do you consider fast fooming given P=NP or NP contained in BQP?
Note that for 1 and 6 to be consistent, the probability of 1 should be higher than whatever you gave for 6, since 3-4-5 is but one pair of pathways for an AI to plausibly go foom.
(Emphasis added.) I think you’ve got that backwards? 1 is P(fast FOOM), 6 is P(fast FOOM | P=NP OR NP in BQP), and you’re arguing that P=NP or NP in BQP would make fast FOOM more likely, so 6 should be higher. That, or 6 should be changed to P( (fast FOOM) AND (P=NP OR NP in BQP) ). Yeah?
The thought was coherent; the typing was wrong. The intended estimate combined 3 and 6 together. That is, P(fast FOOM) >= P(fast FOOM | P=NP) * P(P=NP).
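(Spelled out: P(fast FOOM) >= P(fast FOOM AND P=NP) = P(fast FOOM | P=NP) * P(P=NP), so the conditional estimate in 6 times the estimate in 3 is a lower bound on the unconditional estimate in 1.)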
Ah, cool. Thanks for the clarification.
Fast FOOM is as plausible as P=NP, agreed.