Computer hacking is not a particularly significant medium of communication between prominent AI research labs, nonprofits, or academic researchers. Much more often than leaked trade secrets, ML people will just use insights found in this online repository called arXiv, where many of them openly and intentionally publish their findings. Nor (as far as I am aware) are stolen trade secrets a significant source of foundational insights for researchers making capabilities gains, local to their institution or otherwise.
I don’t see this changing on its own, regardless of how “close” we are to developing AGI. So for now, increasing information security standards across the field seems to me like a waste of time, particularly when talking about alignment labs that (hopefully) pioneer a fraction of a fraction of relevant capabilities research. It’s hard for me to imagine a timeline in which MIRI is safeguarding a big red button from China, without the above facts also having changed, that’s not also an Ultra Fucked timeline.
An evil part of me would really love for cybersecurity to be very relevant to AI alignment, because it’s super interesting and also my field, but (fortunately?) I really don’t understand the people who claim that it is. I could be missing something very critical though.
Do we have a good idea of how the resources at prominent AI research labs compare to the resources that go into Five Eyes AI models for intelligence analysis, or into Chinese government pursuits?
I’ve forgotten at this point who they are, but I will ask some of my friends later to give me some of the public URLs of the “big players” working in this space so you can partly see for yourself. Their marketing is really impressive, as you’d expect from government contractors, but I encourage you to actually look at the product on a technical level.
Largely: the NSA and its military-industrial partners don’t come up with new innovations, except as it applies to handling the massive amounts of data they have and their interesting information security requirements. They just apply technologies and insights from companies like OpenAI or DeepMind. They’re certainly using things like large language models to scan your emails now, but that’s because OpenAI did the hard work already.
More importantly, when they do come up with innovations, they don’t publish them on the internet, so they don’t burn much of the “commons”, as it were.
I can’t give much insight on China, sadly.
> Largely: the NSA and its military-industrial partners don’t come up with new innovations, except as it applies to handling the massive amounts of data they have and their interesting information security requirements.
There was a long period when the NSA did come up with cryptography-related math innovations in secret and did not share them publicly; differential cryptanalysis, for example, was known inside IBM and the NSA in the 1970s but kept quiet until its public rediscovery around 1990.
The NSA does see itself as the leading employer of mathematicians in the United States. To the extent that those employees come up with groundbreaking insights, those are likely classified and you won’t find them in the marketing materials of government contractors.