Yes; for example, “Clown Attacks” aren’t at all novel or limited to AI, they’re old stuff. And it’s not even true that you can’t be resistant to them, though actively pushing back against peer pressure on these things isn’t very fashionable these days. That said, the biggest risk of LLMs right now is indeed, IMO, how well they can carry out certain forms of propaganda and sentiment analysis en masse. I can no longer say “the government wouldn’t literally read ALL your emails to figure out what you think; you’re not worth the work it would take”: now it might, because the cost has dropped dramatically.
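To make the cost point concrete, here is a minimal sketch in Python of what “sentiment analysis en masse” now takes. It uses the openai client purely as one example of a cheap LLM API; the model name, prompt wording, and the idea of running it over scraped messages are my own illustrative assumptions, not a description of any actual program.

from openai import OpenAI

# Rough sketch: label the attitude expressed in a large pile of messages with
# an off-the-shelf LLM API. The point is not the specific provider or model
# (both are placeholders), but that the marginal cost per message is now a
# fraction of a cent, so "read ALL the emails" is no longer prohibitively
# expensive in analyst labor.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_attitude(text: str) -> str:
    """Return a one-word label for the author's attitude toward the government."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for any cheap instruction-tuned model
        messages=[
            {
                "role": "user",
                "content": (
                    "Label the author's attitude toward the government in one "
                    "word (supportive / critical / neutral):\n\n" + text
                ),
            }
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # In practice this loop would run over millions of scraped messages;
    # the only remaining constraint is API spend, not human effort.
    sample_messages = [
        "I can't believe they passed that bill, absolute disgrace.",
        "Honestly the new policy seems fine to me.",
    ]
    for msg in sample_messages:
        print(classify_attitude(msg), "<-", msg)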
That said, the biggest risk of LLMs right now is indeed, IMO, how well they can carry out certain forms of propaganda and sentiment analysis en masse.
I agree with this comment in general, but not with the idea that propaganda generation is the greatest risk. Lots of people already know about that, and I’d argue that the risk to democracy via persuading the masses isn’t very tractable, whereas the risk to the AI safety community, via manipulating elites in random ways with automated high-level psychology research, is very tractable (minimize sensor exposure).
My point wasn’t that this would be a fundamentally new capability, but that it could be deployed at a scale and cost that were impossible before: armies of extremely smart and believable bots flooding social media everywhere. The “huh, everyone else except me thinks X, maybe they do have a point / my own belief is hopeless” conformity effect is real and has often been exploited already, but this lets bad actors take it to a whole new level. As you say, it could also be deployed against AI safety itself, but not exclusively.