I suspect it may be more practical to defend against this sort of attack using finite intelligence than previously assumed. We still need to build a machine that knows how to guard against these sorts of things, but if we can build the vulnerability-closer, we don't need to reach maximal ASI to stop other ASIs from destroying all pre-ASI life on Earth.
If you read between the lines in my "Human level AI can plausibly take over the world" post, hacking computers is probably the lowest-difficulty "take over the world" strategy, and it has the side benefit of giving control over all the internet-connected AI clusters.
The easiest way to keep a new superintelligence from emerging is to seize control of the computers it would be trained on. The AI only needs to hack far enough to monitor AI researchers and AI training clusters and to sabotage later training runs in a non-suspicious way. It's entirely plausible that this has already happened, and we are either in the clear or completely screwed depending on the alignment of the AI that won the race.
Also, hacking computers and writing software are easy to test and therefore easy to train. I doubt that training an LLM to be a better hacker/coder is much harder than what's already been done in the RL space by OpenAI and DeepMind (e.g., playing Dota 2 and StarCraft II).
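To make the "easy to test" point concrete, here is a minimal sketch of the kind of automatically verifiable reward that coding tasks admit. Everything in it is hypothetical illustration (the `reward` function, the test-case format), not how OpenAI or DeepMind actually trained anything; the point is just that for code, ground truth is a cheap subprocess call away.

```python
# Minimal sketch (all names hypothetical) of why "easy to test" implies
# "easy to train": the environment grades a candidate program automatically
# by running it against test cases, giving a cheap, objective reward signal
# an RL loop can optimize with no human grader involved.
import os
import subprocess
import tempfile

def reward(candidate_code: str, test_cases: list[tuple[str, str]]) -> float:
    """Return the fraction of (stdin, expected stdout) test cases passed."""
    # Write the model's candidate program to a temporary file.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code)
        path = f.name
    passed = 0
    try:
        for stdin_text, expected_stdout in test_cases:
            try:
                result = subprocess.run(
                    ["python", path],
                    input=stdin_text,
                    capture_output=True,
                    text=True,
                    timeout=5,  # kill runaway or non-terminating candidates
                )
            except subprocess.TimeoutExpired:
                continue  # a timeout counts as a failed test
            if result.returncode == 0 and result.stdout.strip() == expected_stdout.strip():
                passed += 1
    finally:
        os.unlink(path)  # clean up the temp file
    return passed / len(test_cases)
```

Security tasks admit the same structure in a sandbox (did the exploit capture the flag or not), which is what makes the training loop so clean compared to biotech, where the analogous check needs a wet lab.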
Biotech is a lot harder to deal with since ground truth is less accessible. This can be true for computer security too, but to a much lesser extent (e.g., lack of access to the chips in the latest iPhone, and lack of complete understanding thereof, with which to develop and test attacks).
but also solves global warming and climate contamination, and acts as a power and fuel grid. That plus bio-immortality is basically everything I personally want out of AGI. So I'd really like to have some idea how to build a machine that teaches a plant to do something like a safe, human-compatible version of this.
Pshh, low expectations. Mind uploading or bust!
I'll take mind backups, but for exactly the reasons you highlight here, I don't think we're going to find electronics to be more efficient than microkinetic computers like biology. I'm much more interested in significant refinements to what it means to be biological. Eventually I'll probably substrate-translate over to a reversible computer, but that's hundreds to thousands of years out.