There’s a particular AI-enabled cybersecurity attack vector that I expect is going to cause a lot of problems in the next year or two. Like, every large organization is gonna get hacked in the same way. But I don’t know the solution to the problem, and I fear that giving particulars on how it would work at a particular FAANG would just make the issue worse.
I don’t understand why you wouldn’t just follow normal responsible disclosure practices here, e.g. just disclose this to Google and then leave it to them.
Google’s red team already knows. They have known about the problem for at least six months and abused the issue successfully in engagements to get very significant access. They’re just not really sure what to do because the only solutions they can come up with involve massively disruptive changes.
I know some pretty senior people in security at two FAANG companies, have a passing acquaintance with people at others, and currently work in the Security org at a comparable company. All of them have reporting channels for specific threats, and none (that I know) are ignorant of the range of AI-enabled attacks that are likely in the near future (shockingly many already). The conversations I’ve had (regarding products or components I do know pretty well) have convinced me that everything I come up with is already on their radar (though some responses are of the form “Yeah, that’s gonna happen and it’s gonna suck. Current strategy is to watch for it and not talk much about it, in order not to encourage it”).
Without disclosing some details, there’s probably no way to determine whether your knowledge or theory is something they can update on. I’m happy to pass on any information, but I can’t see why you’d trust me more than you’d trust direct employees of the future victims.
The security team definitely knows about the attack vector, and I’ve spoken to them. It’s just that neither I nor they really know what the industry as a whole is going to do about it.
Sounds like the sort of thing I’d forward to Palisade Research.