Hi! New to the forums and excited to keep reading.
Bit of a meta-question: given the proliferation of LLM-powered bots on social media platforms like Twitter, do the LW mods/team have any concerns about AI-generated content becoming an issue here in a more targeted way?
For a more benign example, say someone wanted to create multiple “personas” here to test how others react. They could create three accounts and always respond to posts with all three: one with a “disagreeable” persona, one neutral, and one “agreeable”.
A malicious example would be someone who hated an idea or person, X, on the forums. They could use GPT-4o to brainstorm avenues of attack on X, then create any number of accounts that always flag posts about X to criticize and challenge them. They could thus bias readers both by creating a false “majority opinion” and through sheer exposure and chance (someone skimming the comments might only see the criticizing and skeptical ones).
Thanks for entertaining my random hypotheticals!
Not a member of the LessWrong team, but historically the site had a lot of sockpuppeting problems that (as far as I know) they solidly fixed and still keep an eye out for.
Makes sense, thanks for the new vocab term!