Bad summary by what criterion?
Something like it’ll lead you to make worse predictions?
Scary possible AI influence ops include things like making friends on Discord via text chat. I predict that if your model of influence-ops concerns covers only political propaganda, you’ll make worse predictions.
Making friends on Discord and then using those friendships to disseminate propaganda, no? Or to test the effectiveness of propaganda, or various other things. It’s still centrally mediated by the propaganda.
Like I agree that there are potential AI dangers involving AIs making friends on Discord beyond just propaganda, but that doesn’t seem to be what “influence ops” are about? And actors other than political ones could do it (e.g. companies could), but he seems to be focusing on geopolitical enemies rather than those actors.
Maybe he has concerns beyond this, but he doesn’t seem to emphasize them much?