This place uses upvote/downvote mechanics, and authors of posts can ban commenters from writing there… which, man, if you want to promote groupthink and all kinds of ingroup hidden rules and outgroup forbidden ideas, that’s how you’d do it.
You can see it at work—when a post is upvoted, is it because it’s well-written and useful, or because it’s saying the groupthink? When a post is downvoted, is it because it contains forbidden ideas?
When you talk about making a new faction—that is what this place is. And naming it Rationalists says something very direct to those who don’t agree—they’re Irrationalists.
Perhaps looking to other communities is the useful path forward. Over on reddit there’s r/science and also r/AskHistorians. Both have had “scandals” of a sort that resulted in some of the most iron-fisted moderation that site has to offer. The moderators are all aligned on what is and isn’t okay. Those communities function extremely well because a culture is maintained.
LessWrong has posts claiming nanites will kill us all, and a post where someone is afraid, apparently, of criticizing Bing ChatGPT because it might come kill them later on.
There is moderation here, but I can’t help but think of those reddit communities and ask whether a post claiming someone is scared of criticizing Bing ChatGPT should be here at all.
When I read posts like that, I think this isn’t about rationality at all. Some of them are a kind of written cosplay, hyped-up fiction, and when that content stays up it attracts more of the same. Then we end up with someone claiming to be an AI running on a meat substrate… when in fact they’re just mentally ill.
I think those posts should have been removed entirely. Same for those Gish gallop posts about AI takeover, where it’s nanites or bioweapons or whatever else.
But at the core of it, they won’t be removed, and more like them will appear in the future, because the bottom level of this website was never about raising the waterline of sanity—it was “AI is coming, it will kill us, and here are all the ways it will kill us.”
It’s a keystone, a basic building block. It cannot be removed. It’s why you see so few posts here saying “hey, AI probably won’t kill us, and even if something gets out of hand, we’ll be able to destroy it easily.”
When a community has fundamental keystones like that, sure, there will be posts pointing out problems, but really the options come down to leave or stay.
Do you believe encouraging the site maintainers to implement degamification techniques on the site would help with your criticisms?