At a glance, this seems like a high-risk, high-reward tactic. I approve of it if it is in fact effective at changing people’s minds, and disapprove if it is instead making people cringe and become biased against the ideas she’s trying to spread.
My immediate impression was that it’s mostly the latter. I agree that these memes seem kind of cringe, especially if they’re being spread in communities that are hostile to AI Safety takes and dislike this kind of content (which my cursory familiarity with r/singularity suggests)...
… but glancing at the karma her posts receive, it doesn’t seem that she’s speaking to a hostile audience with takes that don’t land. Most of her posts seem to get at least average karma for their communities, some are big hits, and the comments are frequently skeptical but rarely outright derisive. This isn’t a strong indicator and I’ve only spent ~5 minutes looking through them, but it does look like it’s working.
@KatWoods, do you have any more convincing metrics you’re using to evaluate the effectiveness of your, ah, propaganda campaign? Genuinely interested in how well it’s working.
Karma is tricky as a measure because subreddits are non-stationary. In particular, I feel like the “vibes” of all the subreddits I listed were different 6+ months ago, and they are becoming more homogeneous (in part due to power users such as Kat Woods). I don’t know of a way to view what the “hot” page of a given subreddit would have looked like at some earlier point in time, so it’s hard to find data on subreddit culture drift. The high karma is also consistent with selection effects: users who don’t like this content bounce off, and only the users who do stick around those subreddits in the long term.
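For concreteness, here is roughly what the naive karma comparison above looks like in code. This is only a sketch, assuming PRAW and Reddit API credentials; the username and credential strings are placeholders, and it does nothing about the non-stationarity or selection-effect caveats, it just compares a poster’s recent submission scores in a subreddit against that subreddit’s recent baseline.

```python
# Sketch: compare one account's recent post scores in a subreddit
# against that subreddit's recent baseline. Requires PRAW and Reddit
# API credentials; all identifiers below are placeholders.
from statistics import median

import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    user_agent="karma-baseline-check by u/yourname",
)

SUBREDDIT = "singularity"
USERNAME = "example_poster"  # hypothetical handle, not a real account

# Scores of the user's recent submissions to the target subreddit.
user_scores = [
    s.score
    for s in reddit.redditor(USERNAME).submissions.new(limit=200)
    if s.subreddit.display_name.lower() == SUBREDDIT.lower()
]

# Baseline: scores of the subreddit's recent submissions overall.
baseline_scores = [s.score for s in reddit.subreddit(SUBREDDIT).new(limit=200)]

print("user median score:", median(user_scores) if user_scores else "n/a")
print("subreddit median score:", median(baseline_scores))
```

Using medians rather than means is a deliberate choice, since a couple of viral posts would otherwise dominate the comparison; even so, it says nothing about whether the audience itself has drifted over time.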
Well put and I agree.