I’ve written substantially about AI-powered social manipulation in the context of the true AI risk (superintelligent AI) in my post on Clown Attacks. I don’t think that trying to deny governments and militaries access to AI-powered manipulation tech is a good idea for the AI safety community; that is just asking to be stomped on in retaliation, since AI-powered manipulation through social media seems important to the current warfare paradigm, and it is probably not a neglected area anyway.
It makes more sense for the AI safety community itself to become hardened against the current AI manipulation paradigm, and to focus on policies that avoid burning the remaining timeline without denying the US government/military the specific capabilities it wants.