I have a history in animal activism (both EA and mainstream) and I think PETA has been massively positive by pushing the Overton window. People think PETA isn’t working because they feel angry at PETA when they feel judged or accused, but they still update on how it’s okay to treat animals, and that’s the point. More moderate groups like the Humane Society get the credit, but it takes an ecosystem. You don’t have to be popular and well-liked to push the Overton window. You also don’t have to be a group that people want to identify with.
But I don’t think PETA’s an accurate comparison for Kat. It seems like you’re comparing Kat and PETA because you would be embarrassed to be implicated by both, not because they have the same tactics or extremity of message. And then the claim that other people will be turned off or misinformed becomes a virtuous pretext to get them and their ideas away from your social group and identity. But you haven’t open-mindedly tried to discover what’s good for the cause. You’re just using your knee-jerk reaction to justify imposing your preferences.
There’s a missing mood here—you’re not interested in learning if Kat’s strategy is effective at AI Safety. You’re just asserting that what you like would be the best for saving everyone’s lives too and don’t really seem concerned about getting the right answer to the larger question.
Again, I have contempt for treating moral issues like a matter of ingroup coolness. This is the banality of evil as far as I’m concerned. It’s natural for humans but you can do better. The LessWrong community is supposed to help people not to do this but they aren’t honest with themselves about what they get out of AI Safety, which is something very similar to what you’ve expressed in this post (gatekept community, feeling smart, a techno-utopian aesthetic) instead of trying to discover in an open-minded way what’s actually the right approach to help the world.
“The LessWrong community is supposed to help people not to do this but they aren’t honest with themselves about what they get out of AI Safety, which is something very similar to what you’ve expressed in this post (gatekept community, feeling smart, a techno-utopian aesthetic) instead of trying to discover in an open-minded way what’s actually the right approach to help the world.”
I have argued with this before—I have absolutely been through an open-minded process to discover the right approach, and I genuinely believe that the likes of MIRI and the Pause AI movement are mistaken and harmful now, and increase P(doom). This is not gatekeeping or trying to look cool! You need to accept that there are people who have followed the field for >10 years, have heard all the arguments, used to believe Yudkowsky et al. were mostly correct, and now agree more with the positions of Pope/Belrose/TurnTrout. Do not belittle or insult them by assigning the wrong motives to them.
If you want a crude overview of my position:
Superintelligence is extremely dangerous, even though at least some of the MIRI worldview is likely wrong.
P(doom) is a feeling; it is too uncertain to reason about rigorously. However, mine is about 20% if humanity develops TAI in the next <50 years. (This is probably more a reflection of my personal psychology than a fact about the world, and I am not trying to strongly pretend otherwise.)
P(doom) if superintelligence were impossible is also about 20% for me, because current tech (LLMs etc.) can clearly enable “1984”-style or worse societies from which there is no comeback and to which extinction is preferable. Our current society/tech/world politics is not proven to be stable.
Because of this, it is not at all clear what the best path forward is, and people should have more humility about their proposed solutions. There is no obvious safe path forward given our current situation. (Yes, if things had gone differently 20-50 years ago, there perhaps could have been...)
Hey Holly, great points about PETA.
I left one comment replying to a critical comment this post got, saying that it wasn’t being charitable (which turned into a series of replies), and now I find myself in a position (a habit?) of defending the OP from potentially insufficiently charitable criticisms. Hence, when I read your sentence...
There’s a missing mood here—you’re not interested in learning if Kat’s strategy is effective at AI Safety.
...my thought is: Are you sure? When I read the post I remember reading:
But if it’s for the greater good, maybe I should just stop being grumpy.
But honestly, is this content for the greater good? Are the clickbait titles causing people to earnestly engage? Are peoples’ minds being changed? Are people thinking thoughtfully about the facts and ideas being presented?
This series of questions seems to me like it’s wondering whether Kat’s strategy is effective at AI safety, which is the thing you’re saying it’s not doing.
(I just scrolled up on my phone and saw that OP actually quoted this herself in the comment you’re replying to. (Oops. I had forgotten this as I had read that comment yesterday.))
Sure, the OP is also clearly venting about her personal distaste for Kat’s posts, but it seems to me that she is also asking the question that you say she isn’t interested in: are Kat’s posts actually effective?
(Side note: I kind of regret leaving any comments on this post at all. It doesn’t seem like the post did a good job encouraging a fruitful discussion. Maybe OP and anyone else who wants to discuss the topic should start fresh somewhere else with a different context. Just to put an idea out there: Maybe it’d be a more productive use of everyone’s energy for e.g. OP, Kat, and you Holly to get on a call together and discuss what sort of content is best to create and promote to help the cause of AI safety, and then (if someone was interested in doing so) write up a summary of your key takeaways to share.)
Yeah, this is the first time I’ve commented on LessWrong in months and I would prefer to just be out of here. But the OP was such nasty mean-girl bullying that, when someone showed it to me, I wanted to push back.
If OP were genuinely curious, she could’ve looked for evidence beyond her personal feelings (e.g. run an internet survey) and/or asked Kat privately. What OP did here is called “concern trolling”.
I agree that that would be evidence of OP being more curious. I just don’t think that, given what OP actually did, it can be said that she wasn’t curious at all.