Look at Chris Meserole’s Twitter. He retweeted this opposition to pausing AI research, and the main AI worries he seems to retweet are about whether his political enemies will use it to generate propaganda that supports them and opposes him and his allies. Looks to me like the Frontier Model Forum is fundamentally compromised.
I checked, definitely directionally true, but “enemies will use it to generate propaganda” is a bad summary of legitimate concern about influence operations.
Bad summary by what criterion?
Something like it’ll lead you to make worse predictions?
Scary possible AI influence ops include things like making friends on Discord via text chats. I predict that if your model of influence-ops concerns covers only political propaganda, you’ll make worse predictions.
Making friends on Discord and then using those friendships to disseminate propaganda, no? Or to test the effectiveness of propaganda, or various other things. It’s still centrally mediated by the propaganda.
Like, I agree that there are other potential AI dangers involving AIs making friends on Discord besides propaganda, but that doesn’t seem to be what “influence ops” are about? And there are actors other than political ones who could do it (e.g. companies), but he seems to be focusing on geopolitical enemies rather than those actors.
Maybe he has concerns beyond this, but he doesn’t seem to emphasize them much?