Fixed the link. The blue pill vs. red pill poll has been discussed here several times already, just not framed as a near-scissor statement.
I agree with your point that bandwagoning is a known exploit, and that LLMs are a force multiplier there compared to, say, tweet bots (assuming I got your point right). Getting LLMs to generate and iterate a message in various media forms until something sticks is definitely a hazard. I guess my point is that this level of sophistication may not be necessary, since, as you said:
The human brain is a kludge of spaghetti code, and it therefore follows that there will be exploitable “zero days” within most or all humans.
Suddenly it clicked, and I realized what I had gotten completely wrong. It was such a stupid mistake on my end, in retrospect:
I never mentioned anything about how public opinion causes regime change, tax noncompliance, military factors like the popularity of wars and elite soldier/officer recruitment, or permanent cultural shifts like those of the 60s/70s that intergenerationally increase the risks around regime change, tax noncompliance, and military popularity. Information warfare is a major factor driving government interest in AI.
I also didn't mention anything about the power gained by steering elites vs. the masses; everything in this post was about the masses, since nothing was stated otherwise. And the masses always get steered, especially in democracies, so that case is not interesting except to people who care a lot about who wins elections. Steering elites, by contrast, means steering people in AI safety and all sorts of other groups that are as distinct and elite relative to wider society as AI safety is.