The cybersecurity aspect seems a good one. Maybe not so much to get people worried about x-risk, but to get them to take the issue of rogue AI seriously in general. I admit I don’t know much about this, but I’m under the impression that:
- models are getting more efficient
- handling speech is a Hard task, but AIs are scarily good at it
- Moore’s law still applies (if not in the same shape)
Taken together, these points imply that it might be possible to build auto-infectors: worms that use AI to search for vulnerabilities, exploit them, and spread updated versions of themselves. It’s probably just a matter of time before a smart virus appears.
Maybe AGI, x-risk, alignment, and safety can be separated into smaller issues? The “general” part of AGI seems to be a sticking point with many people—perhaps it would be good to start by showing that even totally dumb AI is dangerous? Especially when bad actors are taken into account—even if you grant that most AI won’t be evil, there are groups that will actively strive to create harmful AI.
Yeah, this was my motivation for writing this post—helping people get on the train (and take the same actions) without needing them to buy into eschatology or x-risk seems hugely valuable.