A singleton could occur if a group of people developed Artificial General Intelligence with a significant lead over their competitors. The economic advantage from sole possession of AGI technology would give the controllers of the technology the opportunity to gain an economic or even a political monopoly on a relatively short timescale.
I’d feel better about this if the “would” in the second sentence became a “could”. It might turn out that AGIs don’t do much initially: if the hardware requirements are very demanding, if boosting their intelligence is very difficult, or if getting them to cooperate is not easy. Alternatively, they might go foom and then destroy everyone, leveling the economic playing field in the sense that all humans would be dead or transformed so much that they might as well be.
This particular risk, as Robin Hanson pointed out, is less plausible if the “race for AGI” involves many competitors and no competitor can gain too large a lead over the others. This “close race” scenario is more likely if there is an “open-source” attitude in the AGI community.
If serious fooming is a risk then this makes things much worse. This will drastically increase the chances that any one group will activate their AGIs without adequate safety precautions (or any at all).
I don’t follow your logic that crypto is somehow more important than advances in direct weapons technologies. Sure, crypto is important, and there are historical examples where it has mattered a lot. But it isn’t clear why it would be more important. There’s no question that the Allies’ cryptographic advantages in World War II mattered, but that’s one of the most extreme historical examples, and even in that case the consensus seems to be that they would likely have won in both theaters without it. Similar remarks apply to other wars where one side had a cryptographic advantage (such as, say, the North in the US Civil War).
I’m also not sure what exactly you are calling for that is different from what we do now. There are open-source implementations of a lot of cryptographic protocols. The protocols that don’t have open-source implementations are things like fully homomorphic encryption, where current protocols aren’t efficient enough to be usable on computers of current capability.
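As context for why homomorphic schemes are interesting at all: textbook RSA is already multiplicatively homomorphic, whereas fully homomorphic encryption must support arbitrary computation on ciphertexts. A toy sketch of the multiplicative property (tiny parameters, no padding, completely insecure; illustration only, Python 3.8+ for the modular inverse):

```python
# Toy "textbook RSA" with tiny, insecure parameters.
p, q = 61, 53
n = p * q                    # modulus (3233)
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent via modular inverse

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 12
# Multiplying ciphertexts multiplies the plaintexts:
# Enc(m1) * Enc(m2) mod n decrypts to m1 * m2.
c_product = (enc(m1) * enc(m2)) % n
assert dec(c_product) == (m1 * m2) % n   # 84
```

A fully homomorphic scheme extends this idea to both addition and multiplication (hence arbitrary circuits), and that generality is what makes current constructions so expensive.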
A first step to carrying out such a plan might include encoding of core mathematical results in an open-source database of formal proofs.
Note that most proofs of protocols’ correctness can be found easily online. There have been some attempts to make open-source databases of formal proofs in general (Cameron Freer has done work in this regard), not just for encryption. This is a good thing, but for the purposes of crypto, having it really won’t change much.
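As a minimal sketch of what an entry in such a database might look like, here is a machine-checked core result in Lean 4 (the theorem name is illustrative):

```lean
-- Commutativity of addition on the naturals: the kind of core
-- mathematical result an open-source proof database would collect.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```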
Followed the link and was disappointed to discover that there isn’t some new encryption scheme based on complex analyticity. :-) Reminds me of the “fractal quantum Hall effect”. In fact, maybe we could use that to realize “fully holomorphic encryption”…
I don’t think a ‘fast FOOM’ is plausible; the existence of multiple competing AGI-equipped powers would serve to deter a ‘slow FOOM’.
Even if cryptography is not as threatening as advances in direct weapons technology (e.g., you could make a case for weaponized nanobots), it is certainly a large source of potentially decisive military advances. Cyber attacks are faster than direct attacks and would be more difficult to defend against. Cyber-attack technology (including cryptography) is harder to reverse-engineer, and its research and deployment involve no physical manufacturing, making its illicit development under a global weapons ban more difficult to detect.
I don’t think a ‘fast FOOM’ is plausible; the existence of multiple competing AGI-equipped powers would serve to deter a ‘slow FOOM’.
This leads to a variety of questions. First, regarding the fast-fooming issue:

1. How fast is “fast FOOM” in your framework?
2. How unlikely does something need to be for you to label it as implausible?
3. How likely do you think it is that P=NP?
4. How likely do you think it is that BQP contains NP?
5. How plausible is it to you that a strong, not-yet-foomed AI could build practical quantum computers?
6. How likely do you consider fast fooming given P=NP or NP contained in BQP?
Note that for 1 and 6 to be consistent, the probability of 1 should be higher than whatever you gave for the probability in 6 times the probability in 3 (ETA: fixed), since 3-4-5 is but one pair of pathways for an AI to plausibly go foom.
the existence of multiple competing AGI-equipped powers would serve to deter a ‘slow FOOM’.
This is not obvious. Moreover, what is to prevent the AGIs from working together in a way that makes humans irrelevant? If there’s a paperclip maximizer and a stamp maximizer, they can agree to cooperate (after all, there’s very little overlap between the elements in stamps and those in metal paperclips), and humans are then just as badly off as if only one of them were around. Multiple strong AIs that don’t share human values mean we have even more intelligent competitors for resources in our approximate light cone. Increasing the number of competing AIs might make it less likely for humans to survive in any way we’d recognize as something we want.
Even if cryptography is not as threatening as advances in direct weapons technology (e.g., you could make a case for weaponized nanobots), it is certainly a large source of potentially decisive military advances.
Not really. Military organizations rarely need to use cutting-edge cryptography. Most interesting cryptographic protocols are things like public-key crypto, which are useful when one has a large number of distinct economic actors who can’t be trusted and don’t have secure communication channels. Armies have things like centralized command structures, which allow one to do things like distribute one-time pads or agree on signals in advance, making most of these issues irrelevant. The situations where armies need cryptographic protocols are situations like World War II, where one has many small groups that one needs to communicate with securely and one doesn’t have easy physical access to them. In that sort of context, modern crypto can help. But large-scale ground wars and similar situations seem like an unlikely form of warfare.
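For reference, the centralized pre-shared-pad case described above is as simple as cryptography gets: a one-time pad is just a byte-wise XOR with a random pad distributed in advance (minimal sketch):

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # XOR each byte with the pad. The pad must be truly random, at least
    # as long as the message, and never reused -- hence the need for a
    # command structure that can distribute pads ahead of time.
    assert len(pad) >= len(data)
    return bytes(b ^ k for b, k in zip(data, pad))

msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))
ct = otp_xor(msg, pad)
# Decryption is the same XOR operation with the same pad.
assert otp_xor(ct, pad) == msg
```

The whole scheme needs no number theory at all, which is the point: with secure pre-distribution, most modern public-key machinery is unnecessary.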
Cyber attacks are faster than direct attacks and would be more difficult to defend against.
Hang on. Are we now talking about security in general? That’s a much broader set of questions than just cryptography. I don’t know that it is in general more difficult to defend against such attacks. Most of those attacks have an easy answer: keep systems offline. Attacks through the internet can cause economic damage, but it is difficult for them to cause military damage unless high-priority systems are connected to the internet, which is just stupid.
Cyber attack technology (including cryptography) is harder to reverse-engineer
Can you expand on this claim?
making its illicit development under a global weapons ban more difficult to detect.
Has anyone ever suggested a global ban on cryptography or anything similar? Why does that seem like a scenario worth worrying about?
6. How likely do you consider fast fooming given P=NP or NP contained in BQP?
Note that for 1 and 6 to be consistent, the probability of 1 should be higher than whatever you gave for 6, since 3-4-5 is but one pair of pathways for an AI to plausibly go foom.
(Emphasis added.) I think you’ve got that backwards? 1 is P(fast FOOM), 6 is P(fast FOOM | P=NP OR NP in BQP), and you’re arguing that P=NP or NP in BQP would make fast FOOM more likely, so 6 should be higher. That, or 6 should be changed to P((fast FOOM) AND (P=NP OR NP in BQP)). Yeah?
The thought was coherent; the typing was wrong. The intended probability estimate was given by 3 and 6 together. That is, P(fast FOOM) >= P(fast FOOM | P=NP) * P(P=NP).
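To make the constraint concrete, the consistency check is simple arithmetic (the numbers below are purely illustrative, not anyone’s actual estimates):

```python
# Lower-bound check: P(fast FOOM) >= P(fast FOOM | P=NP) * P(P=NP),
# since P=NP is only one of the pathways to a fast foom.
p_pnp = 0.05              # hypothetical P(P=NP)
p_foom_given_pnp = 0.6    # hypothetical P(fast FOOM | P=NP)

lower_bound = p_foom_given_pnp * p_pnp   # 0.03
p_foom = 0.1                             # any consistent overall estimate
assert p_foom >= lower_bound
```

Any unconditional estimate of fast FOOM below that product would be probabilistically incoherent with the conditional answers.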
Thanks. Typo fixed.
Ah, cool. Thanks for the clarification.
Fast FOOM is as plausible as P=NP, agreed.