Wouldn’t you expect that if the cause actually made sense though? (and not only if this is a cult)
There is secret knowledge that you pay for (ai-box)
Less than 0.01% of the users have played an ai-box game (to my knowledge), and even fewer have played it for money.
Members do some kooky things (cryonics, polyamory)
Again, a fairly small subset for the first thing, a slightly larger one for the second, but I guess I will give you that one.
Members claim “rationality” has helped them lose weight or sleep better—subjective things without controls—rather than something more measurable and where a mechanism is more obvious.
Probably a tiny subset of users claim that—I personally have never seen anyone claim that rationality helped them sleep better, and if you mean that evidence-based reasoning helped them find an intervention designed to increase sleep quality, you are grasping at straws.
At least one thing is not supposed to be discussed in public (banned memetic hazard). LW members seem significantly kookier when talking about this (and in the original deleted thread) than on more public subjects.
We are not supposed to write out the actual basilisk (there is only one) on lesswrong.com. There is no problem with talking about it in public, and again this affects a tiny portion of users.
Members have a lot of jargon. It can seem like they’re speaking their own language. More, there’s a bunch of literature embedded in the organization’s worldview; publicly this is treated as simple fiction, but internally it’s clearly taken more seriously.
Giving you this one as well.
Although there’s no explicit severing advice, LW encourages (in practice if not in theory) members to act in ways that reduce their out-of-group friendships
Bullshit.
The hierarchy is opaque; it feels like there is a clique of high-level users, but this is not public.
There are just respected users and no clear-cut hierarchy—that’s what happens at most places. For a proxy of who is a high-level user look at the ‘Top Contributors’.
This sort of point-by-point refutation is the same sort of thing that would happen in a church that was trying to defend against allegations of cultyness.
I don’t think lmm’s list of reasons was utterly compelling—good, but not utterly compelling—but I don’t think it would matter if it were a perfect list, because there will always be a defense for accusations of cultyness that satisfies the church/forum.
It is more interesting watching it happen here vs. the church IMO because LW is all about rationality, where the church can always push the “faith” button when they are backed into a logical corner.
At the end of the day, it is just an online forum. But it does sound to me (based on what I can gather from perusing) like there is a group of people here who take this stuff seriously enough to make cultyness possible.
I’m sure the “LW/cryonics/transhumanism/basilisk stuff is so similar to how religion works” bit got old a long time ago, but Dear Lord is it apparent and fascinating to me.
This sort of point-by-point refutation is the same sort of thing that would happen in a church that was trying to defend against allegations of cultyness.
Ah, the perennial dilemma of how to respond to an accusation of cultiness. If one bothers to rebut it: that’s exactly what a cult would do! If one doesn’t rebut it: oh, the accusation must be unanswerable and hence true!
I completely understand. And I know mine is pretty cheap reasoning. But it just reminds me of what happens in a similar situation in the church. Feel free to ignore it. As I said, I’m confident it has probably been played out by now. I’m satisfied just to watch in awe.
Wouldn’t you expect that if the cause actually made sense though? (and not only if this is a cult)
Given how much LWers seem to care about effective charity, I’d expect more scrutiny, and a stronger insistence on measurable outcomes. I guess you’re right though; the money isn’t inherently a problem.
Less than 0.01% of the users have played an ai-box game (to my knowledge), and even fewer have played it for money.
It seems like a defining characteristic; it’s one place where the site clearly differs from more “mainstream” AI research (though this may be a distorted perception since it was how I first heard of LW)
if you mean that evidence-based reasoning helped them find an intervention designed to increase sleep quality, you are grasping at straws.
Shrug. It looks dodgy to me. It pattern-matches with e.g. the unverifiable stories people tell of their personal experience of Jesus.
We are not supposed to write out the actual basilisk (there is only one) on lesswrong.com. There is no problem with talking about it in public
That’s not at all clear. I’ve never seen any explicit rules. I’ve seen articles that carefully avoid saying the name.
There are just respected users and no clear-cut hierarchy—that’s what happens at most places.
Even on internet forums there’s usually an explicit distinction between mod and not, and often layers to it. (The one exception I know is HN, and even there people know who pg is, who’s part of YC and who’s not, and stories are presented differently if they’re coming from YC members). And it’s unusual and suspicious for the high-ups to all be on first name terms with each other. It raises questions over objectivity, oversight, conflict resolution.
It seems like a defining characteristic; it’s one place where the site clearly differs from more “mainstream” AI research (though this may be a distorted perception since it was how I first heard of LW)
It’s not. Your view is definitely distorted.
Shrug. It looks dodgy to me. It pattern-matches with e.g. the unverifiable stories people tell of their personal experience of Jesus.
..
That’s not at all clear. I’ve never seen any explicit rules. I’ve seen articles that carefully avoid saying the name.
Look around then? Eliezer has even made a Reddit thread for things like that where the basilisk is freely discussed.
Even on internet forums there’s usually an explicit distinction between mod and not, and often layers to it. (The one exception I know is HN, and even there people know who pg is, who’s part of YC and who’s not, and stories are presented differently if they’re coming from YC members). And it’s unusual and suspicious for the high-ups to all be on first name terms with each other. It raises questions over objectivity, oversight, conflict resolution.
Yeah, and people here know who Eliezer Yudkowsky is and who is part of MIRI, which is LW’s parent organization.
Look around then? Eliezer has even made a Reddit thread for things like that where the basilisk is freely discussed.
I’m not active on reddit. Most forums have a link to the rules right next to the comment box; this one does not. There clearly is a chilling effect going on, because I’ve seen posts that make carefully oblique references to memetic hazards rather than just saying “don’t post the basilisk in the comments please”.
Yeah, and people here know who Eliezer Yudkowsky is and who is part of MIRI, which is LW’s parent organization.
I have no idea who’s part of MIRI and which posts are or aren’t from MIRI, because we don’t do the equivalent of (YC 09) on stories here. (And HN was explicitly the worst other example I know; they could certainly stand to improve their transparency a lot).
And it’s unusual and suspicious for the high-ups to all be on first name terms with each other.
By “first-name terms with each other”, do you mean something more than the literal meaning of “familiar with someone, such that one can address that person by his or her first name”? Because in my experience, treating other users on a first name basis is the default for all users on many Internet forums, LW included.
I meant “talk about each other as if they’re close personal friends”. (Myself I generally try to avoid using first names for people who aren’t such, but I appreciate that that’s probably a cultural difference).
And it’s unusual and suspicious for the high-ups to all be on first name terms with each other. It raises questions over objectivity, oversight, conflict resolution.
I think this is more due to the number of people who have their real name as their LessWrong username than any sinister cabal.