Once again we see a "single actor world" type mistake: if such bugs existed at the limits of discoverability, purpose-built teams and models would already have found and exploited them.
But in this case the idea is even more obviously nonsensical, since a large share of the world's thinking power, both human and compute, already goes into finding bugs in the Linux kernel and in common ARM and x86 architectures.
All new exploits (e.g. Spectre) turn out to be very minor and difficult to profitably use, requiring gargantuan lengths of time and special circumstances to attack through.
The assumption that many bugs lie hidden in the kernel also relies on the flawed premise that being "perfect" and being "good enough" are separated by a sea of potential. In practice there are no examples of this so far: even when exploits were found in the past, they were niche and marginal, because the software (or hardware) they were found in was "good enough", leaving room for exploits while coming close to guaranteeing those exploits wouldn't be all-powerful.
I initially upvoted because I like new perspectives on alignment that aren't condescending, then switched to a strong downvote immediately after reading this frankly factually incorrect take. What on earth do you mean, "if such bugs existed at the limits of being discoverable purpose-built teams and models would have found and exploited them"? They do?! There's an entire legally legitimized subsection of the computer security industry dedicated to building and weaponizing such bugs, and no parallel economic force trying to fix them. They don't get patched because they're sold to intelligence agencies and nation states rather than reported to the vendor out of the goodness of the attackers' hearts. That's to say nothing of the bugs that remain unpatched because their exploitation has yet to be automated and made scalable (like social engineering). It almost sounds like a security engineer told you the (correct) opinion that Rowhammer and hardware bugs are overrated, and you took away that "All new exploits (e.g. Spectre) turn out to be very minor and difficult to profitably use". Who? What?!
Computer security is asymmetric. Just because a model can compromise a system does not mean that model can propose a fix for the same or lower cost. And just because a model or team can propose a fix for the same or lower cost, doesn’t mean that the defenders have that model. And just because the defenders have that model doesn’t mean it’s cost effective for them to use it. All of this is raised to the Nth power for new innovations cooking inside of DeepMind’s labs that haven’t been released to the public yet.
See the cases above: even if you assume asymmetry (how does using banks square with that belief?), you are still left with the adversarial problem that all easy-to-claim exploits are taken, and new exploits are usually found in the same (insecure, old) software and hardware.
So all exploitable niches are close to saturation at any given time, provided an incentive exists (and it does) to find them.
Literally none of the factual claims you just made are true. You have something like eight components missing from your world model, and that's affecting your ability to understand the current computer security landscape. Zerodium, Raytheon, and the like exist and have kept the major government entities stocked with whatever bugs they need for decades now. It's not an arms race: one side has been beating the other into the curb for 15 fucking years. You need to figure out what's missing from your perspective until it allocates sufficient probability mass to that fact. Here are three such things:
Two attackers can discover and use the same exploit at the same time; exploits are not rivalrous goods. This is part of what makes these bugs so lucrative: Zerodium can sell them exclusively or to any number of groups. Only one easy-to-find bug has to exist for all of the existing agencies to break your security.
Banks generally do not have to protect themselves from the kinds of people 0day developers sell their products to, because nation states do not target their bottom line. Their primary concern is protecting themselves from the one-in-a-hundred computer hacker around the globe who is a criminal or links up with 1-5 other criminal hackers. Those criminals have to deal with severe coordination problems, and they do crime mainly because they're not smart enough to develop weaponizations for said intelligence agencies, which is much more lucrative. If banks and technology companies had to protect themselves from Raytheon's 0day developers, the world would look very different than it currently does. Likewise, banks also do not currently protect themselves from superintelligent AIs, because superintelligent AIs do not currently exist.
Increased demand for, or production of, zero-day vulnerabilities does not produce increased demand for protection from zero-day vulnerabilities. The NSA could start spending 10x what it currently does on such weaponizations, and it would not induce Apple to spend any more than it currently does, because the NSA being in possession of a zero-click bug for iMessage does not affect Apple's bottom line.
And even if literally ALL of that were false, it wouldn't mean DeepMind couldn't make an AI mildly better than other AIs or existing hackers at developing a 0day it could use to hijack a couple of servers. This objection to a new SOTA AI being able to do so is complete nonsense. Come to think of it, I bet I could make a SOTA AI able to grab you a few AWS keys.
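As a sketch of how low the bar is here: AWS access key IDs follow a documented format ("AKIA" followed by 16 uppercase alphanumeric characters), so even a trivial regex scanner, no AI required, can harvest candidate credentials from public text dumps such as committed config files. A minimal illustration in Python (the example key ID below is the placeholder from AWS's own documentation, not a real credential):

```python
import re

# AWS access key IDs have a well-known shape: the prefix "AKIA"
# followed by 16 uppercase alphanumeric characters. Scanning public
# repos, pastes, and logs for this pattern is a staple of automated
# credential harvesting.
AWS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def find_aws_key_ids(text: str) -> list[str]:
    """Return every substring of `text` matching the AWS key ID format."""
    return AWS_KEY_RE.findall(text)

# Fabricated config snippet using AWS's documented placeholder key ID:
leaked = find_aws_key_ids("aws_access_key_id = AKIAIOSFODNN7EXAMPLE")
```

A real harvester would of course also need the paired secret key and a crawl pipeline, but the matching step really is this simple.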