I doubt your optimism about the level of security that is realistically achievable. Don’t get me wrong: the software industry has made huge progress (at large cost!) on security. Where before, most stuff popped a shell if you looked at it funny, getting a shell is now a large effort for many targets.
Further progress will be made.
If we extrapolate this progress, we will optimistically reach a point where impactful, reliable 0-days are out of reach for most hobbyists and criminals, and are the domain of the natsec apparatus of great powers.
But I don’t see how raising this waterline helps with AI risk in particular?
As in: godlike superintelligence is game over anyway. An AI that is as good at exploitation as the rest of humanity taken together is beyond what any realistically achievable level of widely deployed security can defend against; and an AI that can’t reach that level without human assistance is probably not lethal anyway.
On the other hand, one could imagine pivotal acts by humans with limited-but-substantial AI assistance that rely on the lack of widespread security.
Pricing human-plus-weakish-AI collaborations out of the world-domination-via-hacking game might actually make matters worse, insofar as weakish, non-independent AI might be easier to keep aligned.
A somewhat dystopian wholesale surveillance of almost every word humans write and say, combined with AI that is good enough at text comprehension, and energy-efficient enough, to pervasively and correctly identify scary-looking research and flag it to human operators for intervention, is plausibly both pivotal and alignable. It also makes for much better cyberpunk novels than burning GPUs (mentally paging cstross: I want my Gibson homage in the form of a “Turing Police”/Laundry-verse crossover).
Also, good that you mentioned rowhammer. Rowhammer, and the DRAM industry’s half-baked, pitiful response to it, is humankind’s capitulation on “making at least some systems actually watertight”.
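For readers who haven’t seen it: rowhammer is a hardware defect with a purely software-visible trigger. Repeatedly activating two aggressor DRAM rows can flip bits in a physically adjacent victim row, with no memory-safety violation anywhere in the code. Here is a minimal sketch of the classic access pattern, assuming x86 with clflush available; the hard part of a real attack, mapping virtual addresses to physical DRAM rows, is omitted, and the function name is mine:

```c
#include <stdint.h>
#include <emmintrin.h> /* _mm_clflush */

/* Illustration only: hammer two aggressor addresses so the memory
 * controller keeps re-activating their DRAM rows. On vulnerable
 * modules this can flip bits in the victim row between them. */
static void hammer(volatile uint8_t *aggressor_a,
                   volatile uint8_t *aggressor_b,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*aggressor_a;  /* read row A */
        (void)*aggressor_b;  /* read row B */
        /* Flush both cache lines so the next reads go to DRAM
         * instead of being served from the cache hierarchy. */
        _mm_clflush((const void *)aggressor_a);
        _mm_clflush((const void *)aggressor_b);
    }
}
```

The point is how little the attacker needs: plain reads plus cache flushes, both unprivileged. No amount of software hardening above this layer helps when the memory itself doesn’t hold its contents.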