I think most “clown attacks” are performed by genuine clowns, not by competent intelligence agencies.
Does this make them better? Not really.
It’s also an attack that’s hard to pull off, especially against a plausible-sounding idea that has been endorsed by someone high-status.
Did we see an attempt at a clown attack against the lab leak hypothesis? Probably. Not a very successful one, but one that kind of worked for a few months, because intelligence agencies aren’t that competent.
Yes, plausible deniability and the very high ratio of ambient/noise clowns are probably among the main things that make clown attacks powerful, and they pair well with user-data-based targeted influence systems: attackers can automate the process of trying variations until they find manipulation strategies that work on the kinds of people who are ordinarily difficult to persuade.
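To make the automation point concrete, here is a minimal toy sketch of that search loop as an epsilon-greedy bandit over message framings per audience segment. Everything in it (the segment names, the framings, the response model) is invented for illustration; the point is only that a dumb loop over engagement data, with no psychological insight, converges on whatever works for each audience.

```python
import random

# Hypothetical illustration: automated strategy search as a multi-armed
# bandit. Segments and framings are made-up placeholders.
SEGMENTS = ["skeptics", "partisans", "apolitical"]          # assumed audience buckets
VARIANTS = ["mockery", "guilt-by-association", "fatigue"]   # assumed message framings

def simulate_response(segment: str, variant: str) -> bool:
    """Stand-in for real engagement data; returns whether the message 'landed'."""
    base = {"skeptics": 0.1, "partisans": 0.3, "apolitical": 0.2}[segment]
    bonus = 0.2 if variant == "mockery" else 0.0  # arbitrary toy assumption
    return random.random() < base + bonus

def epsilon_greedy(segment: str, trials: int = 1000, eps: float = 0.1) -> str:
    """Return the framing with the best observed hit rate for one segment."""
    hits = {v: 0 for v in VARIANTS}
    pulls = {v: 0 for v in VARIANTS}
    best = lambda: max(VARIANTS, key=lambda v: hits[v] / pulls[v] if pulls[v] else 0.0)
    for _ in range(trials):
        v = random.choice(VARIANTS) if random.random() < eps else best()  # explore/exploit
        pulls[v] += 1
        hits[v] += simulate_response(segment, v)
    return best()

for seg in SEGMENTS:
    print(seg, "->", epsilon_greedy(seg))
```

Nothing in the loop needs to understand why a framing works; with enough traffic, per-segment feedback alone is sufficient.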
I’d argue that plausible deniability makes clown attacks easy to pull off, and that if a clown attack was used to deny people cognition about the lab leak hypothesis, then it was wildly successful and still is; lab leak probably won’t be one of the main issues in the 2024 election even though it would naturally be more relevant than all the other issues combined. That’s the kind of thing that becomes possible with modern AI-powered psychological research systems, although the vastly weaker 20th-century psychological research paradigm might have been sufficient there too.
lc and I have both written high-level posts about evaluating intelligence agency competence; it remains an open question, since you would expect large numbers of case studies of incompetence regardless of how competent the major players in the top 5-20% actually are.