I know a thing or two about this (expert on Scientology, knowledgeable about lesser nasty memetic infections), and in my opinion LW really isn’t in danger or a source of danger. It has plenty of weird bits, which set off people’s “this person appears to be suffering a damaging memetic infection” alarms (“has Bob joined a cult?”), but it’s really not off on crack.
SIAI, I can’t comment on. I’d hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.
I was chatting with ciphergoth about this last night, while he worked at chipping away my disinterest in signing up for cryonics. I’m actually excessively cautious about new ideas and extremely conservative about changing my mind. I think I’ve turned myself into Mad Eye Moody when it comes to infectious memes. (At least in paranoia; I’m not bragging about my defences.) On the other hand, this doesn’t feel like it’s actually hampered my life. On the other other hand, I would not of course know.
SIAI, I can’t comment on. I’d hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.
I don’t have extensive personal experience with SIAI (spent two weekends at their Visiting Fellows house, attended two meetups there, and talked to plenty of SIAI-affiliated people), but the following have been my impressions:
People there are generally expected to have read most of the Sequences… which could be a point for cultishness in some sense, but at least they’ve all read the Death Spirals & Cult Attractor sequence. :P
There’s a whole lot of disagreement there. They don’t consider that a good thing, of course, but any attempts to resolve disagreement are done by debating, looking at evidence, etc., not by adjusting toward any kind of “party line”. I don’t know of any beliefs that people there are required or expected to profess (other than basic things like taking seriously the ideas of technological singularity, existential risk, FAI, etc., not because it’s an official dogma, but just because if someone doesn’t take those seriously it just raises the question of why they’re interested in SIAI in the first place).
On one occasion, there were some notes on a whiteboard comparing and contrasting Singularitarians and Marxists. Similarities included “[expectation/goal of] big future happy event”, “Jews”, “atheists”, “smart folks”. Differences included “popularly popular vs. popularly unpopular”. (I’m not sure which was supposed to be the more popular one.) And there was a bit noting that both groups are at risk of fully general counterarguments — Marxists dismissed arguments they didn’t like by calling their advocates “counterrevolutionary”, and LW-type Singularitarians could do the same with categorical dismissals such as “irrational”, “hasn’t overcome their biases”, etc. Note that I haven’t actually observed SIAI people doing that, so I just read that as a precaution.
(And I don’t know who wrote that, or what the context was, so take that as you will; but I don’t think it’s anything that was supposed to be a secret, because (IIRC) it was still up during one of the meetups, and even if I’m mistaken about that, people come and go pretty freely.)
People are pretty critical of Eliezer. Of course, most people there have a great deal of respect and admiration for him, and to some degree, the criticism (which is usually on relatively minor things) is probably partly because people there are making a conscious effort to keep in mind that he’s not automatically right, and to keep themselves in “evaluate arguments individually” mode rather than “agree with everything” mode. (See also this comment.)
So yeah, my overall impression is that people there are very mindful that they’re near the cult attractor, and intentionally and successfully act so as to resist that.
So yeah, my overall impression is that people there are very mindful that they’re near the cult attractor, and intentionally and successfully act so as to resist that.
Sounds like it more so than any other small group I know of!
I would be surprised if less wrong itself ever developed fully into a cult. I’m not so sure about SIAI, but I guess it will probably just collapse at some point. LW doesn’t look like a cult now. But what was Scientology like in its earliest stages?
Is there mostly a single way in which groups gradually turn into cults, or does it vary a lot?
My intuition was more about Ayn Rand and objectivists than Scientology, but I don’t really know much here. Does anybody know what early objectivists were like?
I didn’t put much thought into this, it’s just some impressions.
I don’t have a quick comment-length intro to how cults work. Every Cause Wants To Be A Cult will give you some idea.
Humans have a natural tendency to form close-knit ingroups. This can turn into the cult attractor. If the group starts going a bit weird, evaporative cooling makes it weirder. edit: jimrandomh nailed it: it’s isolation from outside social calibration that lets a group go weird.
Predatory infectious memes are mostly not constructed, they evolve. Hence the cult attractor.
Scientology was actually constructed—Hubbard had a keen understanding of human psychology (and no moral compass and no concern as to the difference between truth and falsity, but anyway) and stitched it together entirely from existing components. He started with Dianetics and then he bolted more stuff onto it as he went.
But talking about Scientology is actually not helpful for the question you’re asking, because Scientology is the Godwin example of bad infectious memes—it’s so bad (one of the most damaging, in terms of how long it takes ex-members to recover—I couldn’t quickly find the cite) that it makes lesser nasty cults look really quite benign by comparison. It is literally as if your only example of authoritarianism was Hitler or Pol Pot and casual authoritarianism didn’t look that damaging at all compared to that.
Ayn Rand’s group turned cultish by evaporative cooling. These days, it’s in practice more a case of individual sufferers of memetic infection—someone reads Atlas Shrugged and turns into an annoying crank. It’s an example of how impossible it is to talk someone out of a memetic infection that turns them into a crank—they have to get themselves out of it.
Is there mostly a single way in which groups gradually turn into cults, or does it vary a lot?
Yes, there is. One of the key features of cults is that they make their members sever all social ties to people outside the cult, so that they lose the safeguard of friends and family who can see what’s happening and pull them out if necessary. Sci*****ogy was doing that from the very beginning, and Less Wrong has never done anything like that.
Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point. But that’s just detail, you’ve nailed the biggie. Good one.
and Less Wrong has never done anything like that.
SIAI staff will have learnt to think in ways that are hard to calibrate against the outside world (singularitarian ideas, home-brewed decision theories). Also, they’re working on a project they think is really important. Also, they have information they can’t tell everyone (e.g. things they consider decision-theoretic basilisks). So there’s a few untoward forces there. As I said, hope they all have their wits about them.
/me makes mental note to reread piles of stuff on Scientology. I wonder who would be a good consulting expert, i.e. more than me.
Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point.
No, it’s much more than that. Scientology makes its members cut off communication with their former friends and families entirely. They also have a ritualized training procedure in which an examiner repeatedly tries to provoke them, and they have to avoid producing a detectable response on an “e-meter” (which measures stress response). After doing this for a while, they learn to remain calm under the most extreme circumstances and not react. And so when Scientology’s leaders abuse them in terrible ways and commit horrible crimes, they continue to remain calm and not react.
Cults tear down members’ defenses and smash their moral compasses. Less Wrong does the exact opposite.
I was talking generally, not about Scientology in particular.
As I noted, Scientology is such a toweringly bad idea that it makes other bad ideas seem relatively benign. There are lots of cultish groups that are nowhere near as bad as Scientology, but that doesn’t make them just fine. Beware of this error. (Useful way to avoid it: don’t use Scientology as a comparison in your reasoning.)
But that error isn’t nearly as bad as accidentally violating containment procedures when handling virulent pathogens, so really, what is there to worry about?
(ducks)
The forbidden topic, obviously.
Cults tear down members’ defenses and smash their moral compasses. Less Wrong does the exact opposite.
What defense against EY does EY strengthen? Because I’m somewhat surprised by the amount I hear Aumann’s Agreement Theorem bandied around with regards to what is clearly a mistake on EY’s part.
No, it’s much more than that. Scientology makes its members cut off communication with their former friends and families entirely.
I’d like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.
If the claim is of merely weakening these ties, then this is definitely happening. I especially mean commitment by signing up for cryonics. It will definitely increase the mental distance between the affected person and their formerly close friends and family, I’d guess about as much as signing up for a weird religion that’s mostly perceived as benign would. I doubt anyone has much evidence about this demographic?
I’d like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.
I don’t think they necessarily make them—all that’s needed is for the person to loosen the ties in their head, and strengthen them to the group.
An example is terrorist cells, which are small groups with a goal who have gone weird together. They may not cut themselves off from their families, but their bad idea grips them enough that their social calibrator goes group-focused. I suspect this is part of why people who decompartmentalise toxic waste go funny. (I haven’t worked out precisely how to get from the first to the second.)
There are small Christian churches that also go cultish in the same way. Note that in this case, the religious ideas are apparently mainstream—but there’s enough weird stuff in the Bible to justify all manner of strangeness.
At some stage cohesion of the group becomes very important, possibly more important than the supposed point of the group. (I’m not sure how to measure that.)
I need to ask some people about this. Unfortunately, the real experts on cult thinking include several of the people currently going wildly idiotic about cryonics on the Rick Ross boards … an example of overtraining on a bad experience and seeing a pattern where it isn’t.
Regardless of the actual chances of both working, and considering the issue from a purely sociological perspective—signing up for cryonics seems to me to be a lot like “accepting Jesus” / being born again / joining some far-more-religious-than-average subgroups of mainstream religions.
In both situations there’s some underlying reasonably mainstream meme soup that is more or less accepted (Christianity / strict mind-brain correspondence) but which most people who accept it compartmentalize away. Then some groups decide not to compartmentalize it but to accept the consequences of their beliefs. It really doesn’t take much more than that.
Disclaimers:
I’m probably in some top 25 posters by karma, but I tend to feel like an outsider here a lot.
The only “rationalist” idea from LW canon I take more or less seriously is the outside view, and the outside view says taking ideas too seriously tends to have horrible consequences most of the time. So I cannot even take outside view too seriously, by outside view—and indeed I have totally violated outside view’s conclusions on several occasions, after careful consideration and fully aware of what I’m doing. Maybe I should write about it someday.
In my estimate all FAI / AI foom / nonstandard decision theories stuff is nothing but severe compartmentalization failure.
In my estimate cryonics will probably be feasible in some remote future, but right now the costs of cryonics (very rarely honestly stated by proponents, backed by serious economic simulations instead of wishful thinking) are far too high and the chances of it working now are far too slim to bother. I wouldn’t even take it for free, as it would interfere with me being an organ donor, and that has non-negligible value for me. And even without that, the personal cost of added weirdness would probably be too high relative to my estimate of it working.
I can imagine alternative universes where cryonics makes sense, and I don’t think people who take cryonics seriously are insane, I just think wishful thinking biases them. In the non-zero but, as far as I can tell, very very tiny portion of possible future universes where cryonics turned out to work, well, enjoy your second life.
By the way, is there any reason for me to write articles expanding my points, or not really?
I’m probably in some top 25 posters by karma, but I tend to feel like an outsider here a lot.
My own situation is not so different although
(a) I have lower karma than you and
(b) There are some LW posters with whom I feel strong affinity
By the way, is there any reason for me to write articles expanding my points, or not really?
I myself am curious and would read what you had to say with interest, and this is a weak indication that others would too; but of course it’s for you to say whether it would be worth the opportunity cost. Probably the community would be more receptive to such pieces if they were cautious & carefully argued than if not; but this takes still more time and effort.
You get karma mostly for contributing more, not for higher quality. Posts and comments both have positive expected karma.
Also you get more karma for more alignment with groupthink. I even recall how in the early days of Less Wrong I stated, based on very solid outside view evidence (from every single subreddit I’ve been to), that karma and reception would come to correlate not only with quality but also with alignment with groupthink—that on a reddit-style karma system, downvoting-as-disagreement / upvoting-as-agreement becomes very significant at some point. People disagreed, but the outside view prevailed.
This unfortunately means that one needs to put a lot more effort into writing something that disagrees with groupthink than something that agrees with it—and such trivial inconveniences matter.
(b) There are some LW posters with whom I feel strong affinity
I don’t think I feel particular “affinity” with anyone here, but I find many posters highly enjoyable to read and/or having a lot of insightful ideas.
I mostly write when I disagree with someone, so for a change (I don’t hate everyone all the time, honestly :-p) here are two among the best writings by lesswrong posters I’ve ever read:
Twilight fanfiction by Alicorn—it is ridiculously good, I guess a lot of people will avoid it because it’s Twilight, but it would be a horrible mistake.
I think it’s a plus point that a contrarian comment will get upvotes for effort and showing its work (links, etc) - that is, the moderation method still seems to be “More like this please” rather than “Like”. Being right and obnoxious gets downvotes.
(I think “Vote up” and “Vote down” might be profitably be replaced with “More like this” and “Less like this”, but I don’t think that’s needed now and I doubt it’d work if it was needed.)
More like this/Less like this makes sense for top posts, but is it helpful for comments?
It’s ok to keep an imperfect system—LW is nowhere near groupthink levels of subreddits or slashdot.
However—it seems to me that stealing the HackerNews / Stackoverflow model of removing the normal downvote, and keeping only the upvote for comments (and report for spam/abuse, or possibly some highly restricted downvote for special situations only; or one which would count for a lot less than an upvote), would reduce groupthink a lot, while keeping all major benefits of the current system.
Other than “not fixing what ain’t broken”, are there any good reasons to keep the downvote for comments? Low quality non-abusive comments will sink to the bottom just because of not getting upvotes, later reinforced by most people reading from highest rated first.
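To make that mechanism concrete, here is a toy simulation (purely illustrative, with made-up qualities and a deliberately simple upvote model) of an upvote-only regime; low-quality comments end up at the bottom simply by collecting fewer upvotes:

```python
import random

random.seed(0)

def simulate_upvote_only(qualities, n_readers=200):
    """Upvote-only scoring: each reader upvotes a comment with probability
    equal to its quality; there are no downvotes, so score = upvote count."""
    return [sum(1 for _ in range(n_readers) if random.random() < q)
            for q in qualities]

# Hypothetical comment qualities in [0, 1]; higher means better.
qualities = [0.8, 0.6, 0.4, 0.1, 0.05]
scores = simulate_upvote_only(qualities)

# Sort highest-scored first, i.e. the default reading order.
for quality, score in sorted(zip(qualities, scores), key=lambda qs: qs[1], reverse=True):
    print(f"quality={quality:.2f}  upvotes={score}")
# Low-quality comments sink to the bottom without any downvote at all.
```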
Disclaimers:
I’m obviously biased as a contrarian, and as someone who really likes reading a variety of contrarian opinions. I rarely bother posting comments saying that I totally agree with something. I occasionally send a private message with thanks when I read something particularly great, but I don’t recall ever doing it here yet, even though a lot of posts were that kind of great.
And I fully admit that on several occasions I downvoted a good comment just because I thought one below it was far better and deserving of a lot of extra promotion. I always felt like I was abusing the system this way. Is this common?
You get karma mostly for contributing more, not for higher quality. Posts and comments both have positive expected karma.
Yes, I’ve noticed this; it seems like there’s a danger of an illusion that one is actually getting something done by posting or commenting on LW on account of collecting karma by default.
On the upside I think that the net value of LW is positive so that (taking the outside view; ignoring the quality of particular posts/comments which is highly variable), the expected value of posts and comments is positive though probably less than one subjectively feels.
Also you get more karma for more alignment with groupthink [...]
Yes; I’ve noticed this too. A few months ago I came across Robin Hanson’s Most Rationalists Are Elsewhere, which is in a similar spirit.
This unfortunately means that one needs to put a lot more effort into writing something that disagrees with groupthink than something that agrees with it—and such trivial inconveniences matter.
Agree here. In defense of LW I would say that this seems like a pretty generic feature across groups in general. I myself try to be careful to interpret statements made by those whose views clash with my own charitably, but I don’t know how well I succeed.
I mostly write when I disagree with someone, so for a change (I don’t hate everyone all the time, honestly :-p)
Good to know :-)
Twilight fanfiction by Alicorn—it is ridiculously good, I guess a lot of people will avoid it because it’s Twilight, but it would be a horrible mistake.
Fascinating; I had avoided it for this very reason but will plan on checking it out.
Agree here. In defense of LW I would say that this seems like a pretty generic feature across groups in general. I myself try to be careful to interpret statements made by those whose views clash with my own charitably, but I don’t know how well I succeed.
I don’t consider LW particularly bad—it seems considerably saner than a typical internet forum of similar size. The level of drama seems a lot lower than is typical. Is my impression right that most of the drama we get centers on obscure FAI stuff? I tend to ignore these posts unless I feel really bored. I’ve seen some drama about gender and politics, but honestly a lot less than these subjects normally attract on other similar places.
I don’t consider LW particularly bad—it seems considerably saner than a typical internet forum of similar size.
I have a similar impression.
LW was the first internet forum that I had serious exposure to. I initially thought that I had stumbled onto a very bizarre cult. I complained about this to various friends and they said “no, no, the whole internet is like this!” After hearing this from enough people and perusing the internet some more I realized that they were right. Further contemplation and experience made me realize that it wasn’t only people on the internet who exhibit high levels of group think & strong ideological agendas; rather this is very common among humans in general! Real life interactions mask over the effects of group think & ideological agendas. I was then amazed at how oblivious I had been up until I learned about these things. All of this has been cathartic and life-changing.
Is my impression right that most of the drama we get centers on obscure FAI stuff?
Not sure, I don’t really pay enough attention. As a rule, I avoid drama in general on account of lack of interest in the arguments being made on either side. The things that I’ve noticed most are those connected with gender wars and with Roko’s post being banned. Then of course there were my own controversial posts back in August.
I’ve seen some drama about gender and politics, but honestly a lot less than these subjects normally attract on other similar places.
The things that I’ve noticed most are those connected with gender wars and with Roko being banned
In the interest of avoiding the spread of false ideas, it should be pointed out that Roko was not banned; rather his post was “banned” (jargon for actually deleted, as opposed to “deleted”, which merely means removed from the various “feeds” (“New”, the user’s overview, etc)). Roko himself then proceeded to delete (in the ordinary way) all his other posts and comments.
By the way, is there any reason for me to write articles expanding my points, or not really?
I’m just some random lurker, but I’d be very interested in these articles. I share your view on cryonics and would like to read some more clarification on what you mean by “compartmentalization failure” and some examples of a rejection of the outside view.
On compartmentalization failure and related issues there are two schools present on less wrong:
Pro-compartmentalization view—expressed in reason as memetic immune disorder—seems to correlate with outside view, and reasoning from experience. Typical example: me
Anti-compartmentalization view—expressed in taking Ideas seriously—seems to correlate with “weak inside view” and reasoning from theory. Typical example: Eliezer
Right now there doesn’t seem to be any hope of reaching Aumann agreement between these points of view, and at least some members of both camps view many of other camp’s ideas with contempt. The primary reason seems to be that the kind of arguments that people on one end of the spectrum find convincing people on the other end see as total nonsense, and with full reciprocity.
Of course there’s plenty of issues on which both views agree as well—like religion, evolution, akrasia, proper approach to statistics, and various biases (I think outside viewers seem to demand more evidence that these are also a problem outside laboratory than inside viewers, but it’s not a huge disagreement). And many other disagreements seem to be unrelated to this.
Is this outside-viewers/pro-compartmentalization/firm-rooting-in-experience/caution vs weak-inside-viewers/anti-compartmentalization/pure-reason/taking-ideas-seriously spectrum only my impression, or do other people see it this way as well?
I might be very well biased, as I feel very strongly about this issue, and the most prominent poster Eliezer seems to feel very strongly about this in exactly the opposite way. It seems to me that most people here have reasonably well defined position on this issue—but I know better than to trust my impressions of people on an internet forum.
And second question—can you think of any good way for people holding these two positions to reach Aumann agreement?
As for cryonics it’s a lot of number crunching, textbook economics, outside view arguments etc. - all leading to very very low numbers. I might do that someday if I’m really bored.
Phrasing it as pro-compartmentalization might cause unnecessary negative affect for a lot of aspiring rationalists here at LW, though I’m too exhausted to imagine a good alternative. (Just in case you were planning on writing a post about this or the like. Also, Anna Salamon’s posts on compartmentalization were significantly better than my own.)
I’m trying to write up something on this without actually giving readers fear of ideas. I think I could actually scare the crap out of people pretty effectively, but, ah. (This is why it’s been cooking for two months and is still a Google doc of inchoate scribbles.)
A quick observation: a perfect Bayesian mind is impossible to actually build; that much we all know, and nobody cares.
But it’s a lot worse—it is impossible even mathematically. Even if we expected as little from it as consistently following the rule that P(b|a)=P(c|b)=100% implies P(c|a)=100% (without getting into choice of prior, infinite precision, transfinite induction, uncountable domains etc., merely the merest minimum still recognizable as Bayesian inference) over unbounded but finite chains of inference over a countable set of statements, it could trivially solve the halting problem.
Yes, it will always tell you which theorem is true and which is false, Gödel’s theorem be damned. It cannot say anything like P(Riemann hypothesis|basic math axioms)=50%, as this automatically implies a violation of Bayes’ rule somewhere in the network (and there are no compartments to limit the damage once it happens—the whole network becomes invalid).
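Here is one way to flesh out that claim (a sketch, not a formal proof, under some simplifying assumptions: a sound, recursively axiomatized theory such as PA, and a computable credence function that assigns probability exactly 1 to precisely the statements provable from the axioms):

```latex
% Sketch only. Assumes the axioms A form a sound, recursively axiomatized theory
% proving every true \Sigma_1 sentence (e.g. Peano arithmetic), and that the agent's
% credence function is computable with P(s | A) = 1 exactly when A proves s.
For a Turing machine $M$, let $h_M$ be the $\Sigma_1$ sentence ``$M$ halts on empty input''.
\begin{align*}
  M \text{ halts}       &\;\Longrightarrow\; A \vdash h_M  \;\Longrightarrow\; P(h_M \mid A) = 1,\\
  M \text{ never halts} &\;\Longrightarrow\; A \nvdash h_M \;\Longrightarrow\; P(h_M \mid A) < 1.
\end{align*}
The computable test ``is $P(h_M \mid A) = 1$?'' would then decide the halting problem,
contradicting Turing's theorem, so no such credence function can actually be computed.
```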
The perfect Bayesian minds people here so willingly accepted as the gold standard of rationality are mathematically impossible, and there’s no workaround, and no approximation that is of much use.
Ironically, perfect Bayesian inference systems work really well inside finite or highly regular compartments, with something else limiting their interactions with the rest of the universe.
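As an illustration of the finite case (a toy compartment with made-up numbers), exact Bayesian inference by brute-force enumeration of a small joint distribution is entirely unproblematic:

```python
# Exact Bayesian inference over a tiny finite domain: a disease/test compartment.
# All numbers are made up purely for illustration.

P_DISEASE = 0.01          # prior probability of having the disease
P_POS_GIVEN_D = 0.95      # test sensitivity
P_POS_GIVEN_NOT_D = 0.05  # false positive rate

def joint(disease: bool, positive: bool) -> float:
    """Full joint probability P(disease state, test result)."""
    p_d = P_DISEASE if disease else 1.0 - P_DISEASE
    p_pos = P_POS_GIVEN_D if disease else P_POS_GIVEN_NOT_D
    return p_d * (p_pos if positive else 1.0 - p_pos)

# Conditioning is just enumeration over the whole (finite) joint distribution.
posterior = joint(True, True) / sum(joint(d, True) for d in (True, False))
print(f"P(disease | positive test) = {posterior:.4f}")
# In a finite compartment like this, Bayes' rule is exact, consistent and complete.
```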
If you want an outside view argument that this is a serious problem: if Bayesian minds were so awesome, how is it that even in the very limited machine learning world, Bayesian-inspired systems are only one of many competing paradigms, better applicable to some compartments, not working well in others?
I realize that I just explicitly rejected one of the most basic premises accepted by pretty much everyone here, including me until recently. It surprised me that we were all falling for something so obvious in retrospect.
Robin Hanson’s post on contrarians being wrong most of the time was amazingly accurate again. I’m still not sure which of the ideas I’ve come to believe that relied on perfect Bayesian minds being the gold standard of rationality I’ll need to reevaluate, but it doesn’t bother me as much now that I’ve fully accepted that compartmentalization is unavoidable, and a pretty good thing in practice.
I think there’s a nice correspondence between the outside view with its set of preferred reference classes and Bayesian inference with its set of preferred priors. Except the outside view can be very easily extended to say “I don’t know”, estimate its own accuracy as applied to different compartments, give more complex answers, evolve in time as reference classes formerly too small to be of any use accumulate enough data to return useful answers, and so on.
For very simple systems, these two should correspond to each other in a straightforward way. For complex systems, we have a choice of sometimes answering “I don’t know” or being inconsistent.
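A minimal sketch of what I mean by an outside view that is allowed to say “I don’t know” (a hypothetical interface; the threshold and data are illustrative):

```python
from typing import Iterable, Optional

def outside_view_estimate(reference_class: Iterable[bool],
                          min_cases: int = 10) -> Optional[float]:
    """Estimate P(success) as the base rate of a reference class of past cases.

    Returns None ("I don't know") when the reference class is too small to be
    of any use; a Bayesian prior has no analogous escape hatch.
    """
    cases = list(reference_class)
    if len(cases) < min_cases:
        return None
    return sum(cases) / len(cases)

# Illustrative usage with made-up outcomes (True = the past case worked out).
print(outside_view_estimate([True, False, True]))        # None: too few cases to say
print(outside_view_estimate([True] * 3 + [False] * 17))  # 0.15
```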
I wanted to write this as a top level post, but a “one of your most cherished beliefs is totally wrong, here’s a sketch of a mathematical proof” post would take a lot more effort to write well.
I tried a few extensions of Bayesian inference that I hoped would be able to deal with it, but this is really fundamental.
You can still use the subjective Bayesian worldview—that P(Riemann hypothesis|basic math axioms)=50% is just your intuition. But you must accept that your probabilities can change with no new data, by just more thinking. This sort of Bayesian inference is just another tool of limited use, with biases, inconsistencies, and compartments protecting it from the rest of the universe.
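A concrete toy example of credences moving by pure thought (illustrative numbers only): before doing the arithmetic you might put roughly 1/9 on each possible leading digit of 7^1000; after computing it, your probability collapses to 0 or 1 without a single new observation.

```python
# Subjective probabilities can move by pure thought: no new data, just computation.
# Before computing, one might spread credence roughly evenly over the nine possible
# leading digits (illustrative "before thinking" credences, not a serious prior).
prior = {digit: 1 / 9 for digit in range(1, 10)}

# "More thinking": actually do the arithmetic.
leading_digit = int(str(7 ** 1000)[0])

# After computing, the credences collapse to certainty, with no observation made.
posterior = {digit: 1.0 if digit == leading_digit else 0.0 for digit in range(1, 10)}

print(f"leading digit of 7^1000 = {leading_digit}")
print(f"credence before thinking: {prior[leading_digit]:.3f}, after: {posterior[leading_digit]:.1f}")
```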
There is no gold standard of rationality. There simply isn’t. I have a fall back position of outside view, otherwise it would be about as difficult to accept this as a Christian finally figuring out there is no God, but still wanting to keep the good parts of his or her faith.
Would anyone be willing to write a top level post out of my comment? You’ll either be richly rewarded by a lot of karma, or we’ll both be banned.
The perfect Bayesian minds people here so willingly accepted as the gold standard of rationality are mathematically impossible, and there’s no workaround, and no approximation that is of much use.
A perfect Bayesian is logically omniscient (and logically omniscient agents are perfect Bayesians) and comes with the same problem (of being impossible). I don’t see why this fact should be particularly troubling.
If you want an outside view argument that this is a serious problem: if Bayesian minds were so awesome, how is it that even in the very limited machine learning world, Bayesian-inspired systems are only one of many competing paradigms, better applicable to some compartments, not working well in others?
An outside view is only as good as the reference class you use. Your reference class does not appear to have many infinitely long levers, infinitely fast processors or a Maxwell’s Demon. I don’t have any reason to expect your hunch to be accurate.
“Outside View” doesn’t mean go with your gut instinct and pick a few superficial similarities.
I have a fall back position of outside view, otherwise it would be about as difficult to accept this as a Christian finally figuring out there is no God, but still wanting to keep the good parts of his or her faith.
There is more to that analogy than you’d like to admit.
A perfect Bayesian is logically omniscient (and logically omniscient agents are perfect Bayesians) and comes with the same problem (of being impossible). I don’t see why this fact should be particularly troubling.
The only way to be “omniscient” over even a very simple countable universe is to be inconsistent. There is no way to assign probabilities to every node in a way that obeys Bayes’ theorem. It’s a lot like Kolmogorov complexity—they can be useful philosophical tools, but neither is really part of mathematics; they’re just logically impossible.
Finite perfect Bayesian systems are complete and consistent. We’re so used to every example of a Bayesian system ever used being finite that we totally forgot that they are not logically possible to extend to even the simplest countable systems. We just accepted handwaving that carries results from finite systems into the countable domain.
An outside view is only as good as the reference class you use.
This is a feature, not a bug.
No outside view systems you can build will be omniscient. But this is precisely what lets them be consistent.
Different outside view systems will give you different results. It’s not so different from Bayesian priors, except you can have some outside view systems for countable domains, and there are no Bayesian priors like it at all.
You can easily have nested outside view system judging outside view systems on which ones work and which don’t. Or some other interesting kind of nesting. Or use different reference classes for different compartments.
Or you could use something else. What we have here are in a way all computer programs anyway—and representing them as outside view systems is just human convenience.
But every single description of reality must either be allowed to say “I don’t know” or blatantly violate rules of logic. Either way, you will need some kind of compartmentalization to describe reality.
It’s not in any way related. Taleb’s point is purely practical—that we rely on very simple models that work reasonably well most of the time, but very rare cases where they fail often also have huge huge impact. You wouldn’t guess that life or human-level intelligence might happen looking at the universe up until that point. Their reference class was empty. And then they happened just once and had massive impact.
Taleb would be more convincing if he didn’t act as if nobody knew even the power law. Everything he writes is about how actual humans currently model things, and that can easily be improved (well, there are some people who don’t know even the power law...; or with prediction markets to overcome pundit groupthink).
You could easily imagine that while humans really suck at this, and there’s only so much improvement we can make, perhaps there’s a certain gold standard of rationality—something telling us how to do it right at least in theory, even if we cannot actually implement it ever due to physical constraints of the universe. Like perfect Bayesians.
My point is that perfect Bayesians can only deal with finite domains. A gold standard of rationality—basically something that would assign some probabilities to every outcome within some fairly regular countable domain, and would merely be self-consistent and follow the basic rules of probability—it turns out that even the simplest such assignment of probabilities is not possible, even in theory.
You can be self-consistent by sacrificing completeness—for some questions you’d answer “no idea”; or you can be complete by sacrificing self-consistency (subjective Bayesianism is exactly like that, your probabilities will change if you just think more about something, even without observing any new data).
And it’s not only perfect Bayesianism: nothing else can work the way people wish either. Without some gold standard of rationality, without some one true way of describing reality, a lot of other common beliefs just fail.
Compartmentalization, biases, heuristics, and so on—they are not possible to avoid even in theory, in fact they’re necessary in nearly any useful model of reasoning. Extreme reductionism is out, emergence comes back as an important concept, it’d be a very different less wrong.
More down to earth subjects like akrasia, common human biases, prediction markets, religion, evopsy, cryonics, luminosity, winning, science, scepticism, techniques, self-deception, overconfidence, signaling etc. would be mostly unaffected.
On the other hand, so much of the theoretical side of Less Wrong is based on the flawed assumption that perfect Bayesians are at least theoretically possible on infinite domains, so that a true answer always exists even if we don’t know it, that it would need something between a very serious update and simply being thrown away.
Some parts of the theory don’t rely on this at all—like the outside view. But these are not terribly popular here.
I don’t think you’d see even much of sequences surviving without a major update.
My point is that perfect Bayesians can only deal with finite domains. A gold standard of rationality—basically something that would assign some probabilities to every outcome within some fairly regular countable domain, and would merely be self-consistent and follow the basic rules of probability—it turns out that even the simplest such assignment of probabilities is not possible, even in theory.
What are the smallest and/or simplest domains which aren’t amenable to Bayesian analysis?
I’m not sure you’re doing either me or Taleb justice (though he may well be having too much fun going on about how much smarter he is than just about everyone else) -- I don’t think he’s just talking about completely unknown unknowns, or implying that people could get things completely right—just that people could do a great deal better than they generally do.
For example, Taleb talks about a casino which had the probability and gaming part of its business completely nailed down. The biggest threats to the casino turned out to be a strike, embezzlement (I think), and one of its performers being mauled by his tiger. None of these are singularity-level game changers.
In any case, I would be quite interested in more about the limits of Bayesian analysis and how that affects the more theoretical side of LW, and I doubt you’d be downvoted into oblivion for posting about it.
What are the smallest and/or simplest domains which aren’t amenable to Bayesian analysis?
Notice that you’re talking about domains already; you’ve accepted it, more or less.
I’d like to ask the opposite question—are there any non-finite domains where perfect Bayesian analysis makes sense?
On any domain where you can have even extremely limited local rules you can specify as conditions, and unbounded size of the world, you can use perfect Bayesian analysis to say whether any Turing machine stops, or to prove any statement of natural number arithmetic.
The only difficulty is bridging the language of Bayesian analysis and the language of computational incompleteness. Because nobody seems to be really using Bayes like that, I cannot even give a convincing example of how it fails. Nobody has tried other than in handwaves.
Thanks, I now understand what you mean. I’ll have to think further about this.
Personally, I find myself strongly drawn to the anti-compartmentalization position. However, I had bad enough problems with it (let’s just say I’m exactly the kind of person that becomes a fundamentalist, given the right environment) that I appreciate an outside view and want to adopt it a lot more. Making my underlying assumptions and motivations explicit and demanding the same level of proof and consistency of them that I demand from some belief has served me well—so far anyway.
Also, I’d have to admit that I enjoy reading disagreements most, even if just for disagreement’s sake, so I’m not sure I actually want to see Aumann agreement. “Someone is wrong on the internet” syndrome has, on average, motivated me more than reasonable arguments, I’m afraid.
Does it seem to you as well that removing downvote for comments (keeping report for spam and other total garbage etc.) would result in more of this? Hacker News seems to be doing a lot better than subreddits of similar size, and this seems like the main structural difference between them.
Probably yes. I don’t read HN much (reddit provides enough mind crack already), but I never block any comments based on score, only downvote spam and still kinda prefer ye olde days of linear, barely moderated forums. I particularly disagree with “don’t feed the trolls” because I learned tons about algebra, evolution and economics from reading huge flame wars. I thank the cranks for their extreme stubbornness and the ensuing noob-friendly explanations by hundreds of experts.
I particularly disagree with “don’t feed the trolls”
And indeed, a very interesting discussion grew out of this otherwise rather unfortunate post.
I’m quite well acquainted with irc, mailing lists, wikis, wide variety of chans, somethingawful, slashdot, reddit, hn, twitter, and more such forums I just haven’t used in a while.
There are upsides and downsides to all communication formats and karma/moderation systems, but as far as I can tell the HN karma system seems to strictly dominate the reddit karma system.
If you feel adventurous and don’t mind trolls, I highly recommend giving chans a try (something sane, not /b/ on 4chan) - anonymity (on chans where it’s widely practised, in many namefagging is rampant) makes people drastically reduce effort they normally put into signalling and status games.
What you can see there is human thought far less filtered than usual, and there are very few other opportunities to observe that anywhere. When you come back from such environment to normal life, you will be able to see a lot more clearly how much monkey tribe politics is present in everyday human communication.
(For some strange reasons online pseudonyms don’t work like full anonymity.)
When you come back from such environment to normal life, you will be able to see a lot more clearly how much monkey tribe politics is present in everyday human communication.
I find that working with animals is good for this, too. Though it’s rarely politic to say so.
What you can see there is human thought far less filtered than usual, and there are very few other opportunities to observe that anywhere. When you come back from such environment to normal life, you will be able to see a lot more clearly how much monkey tribe politics is present in everyday human communication.
This is the sort of thing that I was referring to here. Very educational experience.
And second question—can you think of any good way for people holding these two positions to reach Aumann agreement?
Sure: compartmentalisation is clearly an intellectual sin—reality is all one piece—but we’re running on corrupt hardware so due caution applies.
That’s my view after a couple of months’ thought. Does that work for you?
(And that sums up about 2000 semi-readable words of inchoate notes on the subject. (ctrl-C ctrl-V))
In the present Headless Chicken Mode, by the way, Eliezer is specifically suggesting compartmentalising the very bad idea, having seen people burnt by it. There’s nothing quite like experience to help one appreciate the plus points of compartmentalisation. It’s still an intellectual sin, though.
Sure: compartmentalisation is clearly an intellectual sin
Compartmentalisation is an “intellectual sin” in certain idealized models of reasoning. Outside view says that not only 100% of human-level intelligences in the universe, but 100% of things even remotely intelligent-ish, were messy systems that used compartmentalisation as one of their basic building blocks, and 0% were implementations of these idealized models—and that in spite of many decades of hard effort, and a lot of ridiculous optimism.
So by the outside view the only conclusion I see is that models condemning compartmentalisation are all conclusively proven wrong, and nothing they say about actual intelligent beings is relevant.
reality is all one piece
And yet we organize our knowledge about reality into an extremely complicated system of compartments.
Attempts at abandoning that and creating one theory of everything like objectivism (Ayn Rand famously had an opinion about absolutely everything, no disagreements allowed) are disastrous.
but we’re running on corrupt hardware so due caution applies.
I don’t think our hardware is meaningfully “corrupt”. All thinking hardware ever made and likely to be made must take appropriate trade-offs and use appropriate heuristics. Ours seems to be pretty good most of the time when it matters. Shockingly good. An ideal reasoner with no constraints is not only physically impossible, it’s not even mathematically possible, by Rice’s theorem etc.
Compartmentalisation is one of the most basic techniques for efficient reasoning with limited resources—otherwise complexity explodes far more than linearly, and plenty of ideas that made a lot of sense in their old context get transplanted into another context where they’re harmful.
The hardware stays what it was, and it was already pretty much fully utilized, so to deal with this extra complexity the model either needs to be pruned of a lot of detail the mind could otherwise manage just fine, and/or other heuristics and shortcuts, possibly with far worse consequences, need to be employed a lot more aggressively.
I like this pro-compartmentalization theory, but it is primarily experience which convinces me that abandoning compartmentalization is dangerous and rarely leads to anything good.
it is primarily experience which convinces me that abandoning compartmentalization is dangerous and rarely leads to anything good.
Do you mean abandoning it completely, or abandoning it at all?
The practical reason for decompartmentalisation, despite its dangers, is that science works and is effective. It’s not a natural way for savannah apes to think, it’s incredibly difficult for most. But the payoff is ridiculously huge.
So we get quite excellent results if we decompartmentalise right. Reality does not appear to come in completely separate magisteria. If you want to form a map, that makes compartmentalisation an intellectual sin (which is what I meant).
By “appears to”, I mean that if we assume that reality—the territory—is all of a piece, and we then try to form a map that matches that territory, we get things like Facebook and enough food and long lifespans. That we have separate maps called physics, chemistry and biology is a description of our ignorance; if the maps contradict (e.g. when physics and chemistry said the sun couldn’t be more than 20 million years old and geology said the earth was at least 300 million years old [1]), everyone understands something is wrong and in need of fixing. And the maps keep leaking into each other.
This is keeping in mind the dangers of decompartmentalisation. The reason for bothering with it is an expected payoff in usefully superior understanding. People who know science works like this realise that a useful map is one that matches the territory, so decompartmentalise with wild abandon, frequently not considering dangers. And if you tell a group of people not to do something, at least a few will promptly do it. This does help explain engineer terrorists who’ve inadvertently decompartmentalised toxic waste and logically determined that the infidel must be killed. And why if you have a forbidden thread, it’s an obvious curiosity object.
The problem, if you want the results of science, is then not whether to decompartmentalise, but how and when to decompartmentalise. And that there may be dragons there.
The practical reason for decompartmentalisation, despite its dangers, is that science works and is effective.
But science itself is extremely compartmentalized! Try getting economists and psychologists to agree on anything, and both have pretty good results, most of the time.
Even microeconomics and macroeconomics make far better predictions when they’re separate, and repeated attempt at bringing them together consistently result in a disaster.
Don’t imagine that compartmentalization sets up impenetrable barriers once and for all—there’s a lot of cautious exchange between nearby compartments, and their boundaries keep changing all the time. I quite like the “compartments as scientific disciplines” image. You have a lot of highly fuzzy boundaries—like for example computer science to math to theoretical physics to quantum chemistry to biochemistry to medicine. But when you’re sick you don’t ask on programming reddit for advice.
The best way to describe a territory is to use multiple kinds of maps.
I don’t think anything you’ve said and anything I said actually contradict.
Try getting economists and psychologists to agree on anything, and both have pretty good results, most of the time.
What are the examples you’re thinking of, where both are right and said answers contradict, and said contradiction is not resolvable even in principle?
Upvoted. I think this is a useful way to think about things like this. Compartmentalizing and decompartmentalizing aren’t completely wrong, but are wrongly applied in different contexts. So part of the challenge is to convince the person you’re talking to that it’s safe to decompartmentalize in the realm needed to see what you are talking about.
For example, it took me quite some time to decompartmentalize on evolution versus biology because I had a distrust of evolution. It looked like toxic waste to me, and indeed has arguably generated some (social darwinism, e.g.). People who mocked creationists actually contributed to my sense of distrust in the early stages, given that my subjective experience with (young-earth) creationists was not of particularly unintelligent or gullible people. However this got easier when I learned more biology and could see the reference points, and the vacuum of solid evidence (as opposed to reasonable-sounding speculation) for creationism. Later the creationist speculation started sounding less reasonable and the advocates a bit more gullible—but until I started making the connections from evolution to the rest of science, there wasn’t reason for these things to be on my map yet.
I’m starting to think arguments for cryonics should be presented in the form of “what are the rational reasons to decompartmentalize (or not) on this?” instead of “just shut up and decompartmentalize!” It takes time to build trust, and folks are generally justifiably skeptical when someone says “just trust me”. Also it is a quite valid point that topics like death and immortality (not to mention futurism, etc.) are notorious for toxic waste to begin with.
ciphergoth and I talked about cryonics a fair bit a couple of nights ago. He posits that I will not sign up for cryonics until it is socially normal. I checked my internal readout; it came back “survey says you’re right”, and I nodded my head. I surmise this is what it will take in general.
(The above is the sort of result my general memetic defence gives. Possibly-excessive conservatism in actually buying an idea.)
So that’s your whole goal. How do you make cryonics normal without employing the dark arts?
I think some additional training in DADA would do me a lot of good here. That is, I don’t want to be using the dark arts, but I don’t want to be vulnerable to them either. And dark arts is extremely common, especially when people are looking for excuses to keep on compartmentalizing something.
A contest for bored advertising people springs to mind: “How would you sell cryonics to the public?” Then filter out the results that use dark arts. This will produce better ideas than you ever dreamed.
The hard part of this plan is making it sound like fun for the copywriters. Ad magazine competition? That’s the sort of thing that gets them working on stuff for fun and kudos.
(My psychic powers predict approximately 0 LessWrong regulars in the advertising industry. I hope I’m wrong.)
(And no, I don’t think b3ta is quite what we’re after here.)
And second question—can you think of any good way for people holding these two positions to reach Aumann agreement?
I’ve been thinking about this a lot lately. It may be that there is a tendency to jump to solutions too much on this topic. If more time was spent talking about what the questions are that need to be answered for a resolution, perhaps it would have more success in triggering updates.
The Wikipedia articles on Scientology are pretty good, by the way. (If I say so myself. I started WikiProject Scientology :-) Mostly started by critics but with lots of input from Scientologists, and the Neutral Point Of View turns out to be a fantastically effective way of writing about the stuff—before Wikipedia, there were CoS sites which were friendly and pleasant but rather glaringly incomplete in important ways, and critics’ sites which were highly informative but frequently so bitter as to be all but unreadable.
(Despite the key rule of NPOV—write for your opponent—I doubt the CoS is a fan of WP’s Scientology articles. Ah well!)
I know a thing or two (expert on Scientology, knowledgeable about lesser nasty memetic infections). In my opinion as someone who knows a thing or two about the subject, LW really isn’t in danger or the source of danger. It has plenty of weird bits, which set off people’s “this person appears to be suffering a damaging memetic infection” alarms (“has Bob joined a cult?”), but it’s really not off on crack.
SIAI, I can’t comment on. I’d hope enough people there (preferably every single one) are expressly mindful of Every Cause Wants To Be A Cult and of the dangers of small closed groups with confidential knowledge and the aim to achieve something big pulling members toward the cult attractor.
I was chatting with ciphergoth about this last night, while he worked at chipping away my disinterest in signing up for cryonics. I’m actually excessively cautious about new ideas and extremely conservative about changing my mind. I think I’ve turned myself into Mad Eye Moody when it comes to infectious memes. (At least in paranoia; I’m not bragging about my defences.) On the other hand, this doesn’t feel like it’s actually hampered my life. On the other other hand, I would not of course know.
I don’t have extensive personal experience with SIAI (spent two weekends at their Visiting Fellows house, attended two meetups there, and talked to plenty of SIAI-affiliated people), but the following have been my impressions:
People there are generally expected to have read most of the Sequences… which could be a point for cultishness in some sense, but at least they’ve all read the Death Spirals & Cult Attractor sequence. :P
There’s a whole lot of disagreement there. They don’t consider that a good thing, of course, but any attempts to resolve disagreement are done by debating, looking at evidence, etc., not by adjusting toward any kind of “party line”. I don’t know of any beliefs that people there are required or expected to profess (other than basic things like taking seriously the ideas of technological singularity, existential risk, FAI, etc., not because it’s an official dogma, but just because if someone doesn’t take those seriously it just raises the question of why they’re interested in SIAI in the first place).
On one occasion, there were some notes on a whiteboard comparing and contrasting Singularitarians and Marxists. Similarities included “[expectation/goal of] big future happy event”, “Jews”, “atheists”, “smart folks”. Differences included “popularly popular vs. popularly unpopular”. (I’m not sure which was supposed to be the more popular one.) And there was a bit noting that both groups are at risk of fully general counterarguments — Marxists dismissed arguments they didn’t like by calling their advocates “counterrevolutionary”, and LW-type Singularitarians could do the same with categorical dismissals such as “irrational”, “hasn’t overcome their biases”, etc. Note that I haven’t actually observed SIAI people doing that, so I just read that as a precaution.
(And I don’t know who wrote that, or what the context was, so take that as you will; but I don’t think it’s anything that was supposed to be a secret, because (IIRC) it was still up during one of the meetups, and even if I’m mistaken about that, people come and go pretty freely.)
People are pretty critical of Eliezer. Of course, most people there have a great deal of respect and admiration for him, and to some degree, the criticism (which is usually on relatively minor things) is probably partly because people there are making a conscious effort to keep in mind that he’s not automatically right, and to keep themselves in “evaluate arguments individually” mode rather than “agree with everything” mode. (See also this comment.)
So yeah, my overall impression is that people there are very mindful that they’re near the cult attractor, and intentionally and successfully act so as to resist that.
Sounds like it more so than any other small group I know of!
I would be surprised if less wrong itself ever developed fully into a cult. I’m not so sure about SIAI, but I guess it will probably just collapse at some point. LW doesn’t look like a cult now. But what was Scientology like in its earliest stages?
Is there mostly a single way how groups gradually turn into cults, or does it vary a lot?
My intuition was more about Ayn Rand and objectivists than Scientology, but I don’t really know much here. Anybody knows what were early objectivists like?
I didn’t put much thought into this, it’s just some impressions.
I don’t have a quick comment-length intro to how cults work. Every Cause Wants To Be A Cult will give you some idea.
Humans have a natural tendency to form close-knit ingroups. This can turn into the cult attractor. If the group starts going a bit weird, evaporative cooling makes it weirder. edit: jimrandomh nailed it: it’s isolation from outside social calibration that lets a group go weird.
Predatory infectious memes are mostly not constructed, they evolve. Hence the cult attractor.
Scientology was actually constructed—Hubbard had a keen understanding of human psychology (and no moral compass and no concern as to the difference between truth and falsity, but anyway) and stitched it together entirely from existing components. He started with Dianetics and then he bolted more stuff onto it as he went.
But talking about Scientology is actually not helpful for the question you’re asking, because Scientology is the Godwin example of bad infectious memes—it’s so bad (one of the most damaging, in terms of how long it takes ex-members to recover—I couldn’t quickly find the cite) that it makes lesser nasty cults look really quite benign by comparison. It is literally as if your only example of authoritarianism was Hitler or Pol Pot and casual authoritarianism didn’t look that damaging at all compared to that.
Ayn Rand’s group turned cultish by evaporative cooling. These days, it’s in practice more a case of individual sufferers of memetic infection—someone reads Atlas Shrugged and turns into an annoying crank. It’s an example of how impossible it is to talk someone out of a memetic infection that turns them into a crank—they have to get themselves out of it.
Is this helpful?
Yes, there is. One of the key features of cults is that they make their members sever all social ties to people outside the cult, so that they lose the safeguard of friends and family who can see what’s happening and pull them out if necessary. Scientology was doing that from the very beginning, and Less Wrong has never done anything like that.
Not all, just enough. Weakening their mental ties so they get their social calibration from the small group is the key point. But that’s just detail, you’ve nailed the biggie. Good one.
SIAI staff will have learnt to think in ways that are hard to calibrate against the outside world (singularitarian ideas, home-brewed decision theories). Also, they’re working on a project they think is really important. Also, they have information they can’t tell everyone (e.g. things they consider decision-theoretic basilisks). So there’s a few untoward forces there. As I said, hope they all have their wits about them.
/me makes mental note to reread piles of stuff on Scientology. I wonder who would be a good consulting expert, i.e. more than me.
No, it’s much more than that. Scientology makes its members cut off communication with their former friends and families entirely. They also have a ritualized training procedure in which an examiner repeatedly tries to provoke them, and they have to avoid producing a detectable response on an “e-meter” (which measures stress response). After doing this for a while, they learn to remain calm under the most extreme circumstances and not react. And so when Scientology’s leaders abuse them in terrible ways and commit horrible crimes, they continue to remain calm and not react.
Cults tear down members’ defenses and smash their moral compasses. Less Wrong does the exact opposite.
I was talking generally, not about Scientology in particular.
As I noted, Scientology is such a toweringly bad idea that it makes other bad ideas seem relatively benign. There are lots of cultish groups that are nowhere near as bad as Scientology, but that doesn’t make them just fine. Beware of this error. (Useful way to avoid it: don’t use Scientology as a comparison in your reasoning.)
But that error isn’t nearly as bad as accidentally violating containment procedures when handling virulent pathogens, so really, what is there to worry about?
(ducks)
The forbidden topic, obviously.
What defense against EY does EY strengthen? Because I’m somewhat surprised by how much I hear Aumann’s Agreement Theorem bandied about with regard to what is clearly a mistake on EY’s part.
I’d like to see some solid evidence for or against the claim that typical developing cults make their members cut off communication with their former friends and families entirely.
If the claim is merely of weakening these ties, then this is definitely happening. I especially mean commitment by signing up for cryonics. It will definitely increase the mental distance between the affected person and their formerly close friends and family, I’d guess about as much as signing up for a weird but mostly-perceived-as-benign religion would. I doubt anyone has much evidence about this demographic?
I don’t think they necessarily make them—all that’s needed is for the person to loosen the ties in their head, and strengthen them to the group.
An example is terrorist cells, which are small groups with a goal who have gone weird together. They may not cut themselves off from their families, but their bad idea has hold of them enough that their social calibrator goes group-focused. I suspect this is part of why people who decompartmentalise toxic waste go funny. (I haven’t worked out precisely how to get from the first to the second.)
There are small Christian churches that also go cultish in the same way. Note that in this case the religious ideas are apparently mainstream—but there’s enough weird stuff in the Bible to justify all manner of strangeness.
At some stage cohesion of the group becomes very important, possibly more important than the supposed point of the group. (I’m not sure how to measure that.)
I need to ask some people about this. Unfortunately, the real experts on cult thinking include several of the people currently going wildly idiotic about cryonics on the Rick Ross boards … an example of overtraining on a bad experience and seeing a pattern where it isn’t.
Regardless of the actual chances of either of them working, and considering the issue from a purely sociological perspective, signing up for cryonics seems to me to be a lot like “accepting Jesus” / being born again / joining some far-more-religious-than-average subgroup of a mainstream religion.
In both situations there’s some underlying reasonably mainstream meme soup that is more or less accepted (Christianity / strict mind-brain correspondence) but which most people who accept it compartmentalize away. Then some groups decide not to compartmentalize it but accept consequences of their beliefs. It really doesn’t take much more than that.
Disclaimers:
I’m probably in some top 25 posters by karma, but I tend to feel like an outsider here a lot.
The only “rationalist” idea from LW canon I take more or less seriously is the outside view, and the outside view says taking ideas too seriously tends to have horrible consequences most of the time. So I cannot even take outside view too seriously, by outside view—and indeed I have totally violated outside view’s conclusions on several occasions, after careful consideration and fully aware of what I’m doing. Maybe I should write about it someday.
In my estimate all FAI / AI foom / nonstandard decision theories stuff is nothing but severe compartmentalization failure.
In my estimate cryonics will probably be feasible in some remote future, but right now the costs of cryonics (very rarely stated honestly by proponents, or backed by serious economic simulations rather than wishful thinking) are far too high and the chances of it working now are far too slim to bother. I wouldn’t even take it for free, as it would interfere with me being an organ donor, and that has non-negligible value for me. And even without that, the personal cost of added weirdness would probably be too high relative to my estimate of it working.
I can imagine alternative universes where cryonics makes sense, and I don’t think people who take cryonics seriously are insane; I just think wishful thinking biases them. In the non-zero but, as far as I can tell, very very tiny portion of possible future universes where cryonics turns out to work, well, enjoy your second life.
By the way, is there any reason for me to write articles expanding my points, or not really?
My own situation is not so different although
(a) I have lower karma than you and
(b) There are some LW posters with whom I feel strong affinity
I myself am curious and would read what you had to say with interest, and this is a weak indication that others would too; but of course it’s for you to say whether it would be worth the opportunity cost. Probably the community would be more receptive to such pieces if they were cautious and carefully argued than if not, but that takes still more time and effort.
You get karma mostly for contributing more, not for higher quality. Posts and comments both have positive expected karma.
Also, you get more karma for more alignment with groupthink. I even recall how, in the early days of Less Wrong, I stated, based on very solid outside-view evidence (from every single subreddit I’ve been to), that karma and reception would come to correlate not only with quality but also with alignment with groupthink—that in a reddit-style karma system, downvoting-as-disagreement / upvoting-as-agreement becomes very significant at some point. People disagreed, but the outside view prevailed.
This unfortunately means that one needs to put a lot more effort into writing something that disagrees with groupthink than something that agrees with it—and such trivial inconveniences matter.
I don’t think I feel a particular “affinity” with anyone here, but I find many posters highly enjoyable to read and/or full of insightful ideas.
I mostly write when I disagree with someone, so for a change (I don’t hate everyone all the time, honestly :-p) here are two of the best pieces of writing by Less Wrong posters I’ve ever read:
Twilight fanfiction by Alicorn—it is ridiculously good. I guess a lot of people will avoid it because it’s Twilight, but that would be a horrible mistake.
Contrarian excuses by Robin Hanson—are you able to admit this about your own views?
I think it’s a plus point that a contrarian comment will get upvotes for effort and showing its work (links, etc.); that is, the moderation method still seems to be “More like this please” rather than “Like”. Being right and obnoxious gets downvotes.
(I think “Vote up” and “Vote down” might profitably be replaced with “More like this” and “Less like this”, but I don’t think that’s needed now and I doubt it’d work if it were needed.)
More like this/Less like this makes sense for top posts, but is it helpful for comments?
It’s OK to keep an imperfect system—LW is nowhere near the groupthink levels of subreddits or Slashdot.
However, it seems to me that stealing the Hacker News / Stack Overflow model of removing the normal downvote and keeping only the upvote for comments (plus a report for spam/abuse, or possibly some highly restricted downvote for special situations only, or one which would count for a lot less than an upvote) would reduce groupthink a lot, while keeping all the major benefits of the current system.
Other than “not fixing what ain’t broken”, are there any good reasons to keep the downvote for comments? Low-quality non-abusive comments will sink to the bottom just by not getting upvotes, later reinforced by most people reading the highest-rated comments first.
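To make the comparison concrete, here’s a toy sketch (purely my own illustration in Python, not anyone’s actual karma code; the class, the function names, and the spam threshold are made up) of the two ranking schemes:

```python
# Toy illustration: reddit-style net-score ranking vs. an HN/SO-style
# upvote-only scheme where the only negative signal is a spam/abuse report.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    upvotes: int = 0
    downvotes: int = 0      # used only by the reddit-style scheme
    spam_reports: int = 0   # the restricted negative signal kept in the upvote-only scheme

def rank_reddit_style(comments):
    """Sort by net score; disagreement-downvotes can actively bury a comment."""
    return sorted(comments, key=lambda c: c.upvotes - c.downvotes, reverse=True)

def rank_upvote_only(comments, spam_threshold=3):
    """Hide reported spam; everything else just sinks by not being upvoted."""
    visible = [c for c in comments if c.spam_reports < spam_threshold]
    return sorted(visible, key=lambda c: c.upvotes, reverse=True)
```

The structural point is just the one above: in the second scheme a non-abusive comment can’t be actively pushed down by disagreement, it can only fail to rise.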
Disclaimers:
I’m obviously biased as a contrarian, and as someone who really likes reading a variety of contrarian opinions. I rarely bother posting comments saying that I totally agree with something. I occasionally send a private message with thanks when I read something particularly great, but I don’t recall ever doing it here yet, even though a lot of posts were that kind of great.
And I fully admit that on several occasions I downvoted a good comment just because I thought one below it was far better and deserved a lot of extra promotion. I always felt like I was abusing the system this way. Is this common?
Yes, I’ve noticed this; it seems like there’s a danger of an illusion that one is actually getting something done by posting or commenting on LW, on account of collecting karma by default.
On the upside, I think that the net value of LW is positive, so that (taking the outside view and ignoring the quality of particular posts/comments, which is highly variable) the expected value of posts and comments is positive, though probably less than one subjectively feels.
Yes; I’ve noticed this too. A few months ago I came across Robin Hanson’s Most Rationalists Are Elsewhere, which is in a similar spirit.
Agree here. In defense of LW, I would say that this seems like a pretty generic feature of groups in general. I myself try to be careful to interpret charitably the statements made by those whose views clash with my own, but I don’t know how well I succeed.
Good to know :-)
Fascinating; I had avoided it for this very reason but will plan on checking it out.
Great article! I hadn’t seen it before.
I don’t consider LW particularly bad—it seems considerably saner than a typical internet forum of similar size. The level of drama seems a lot lower than is typical. Is my impression right that most of the drama we get centers on obscure FAI stuff? I tend to ignore those posts unless I feel really bored. I’ve seen some drama about gender and politics, but honestly a lot less than these subjects normally attract in other similar places.
I have a similar impression.
LW was the first internet forum that I had serious exposure to. I initially thought that I had stumbled onto a very bizarre cult. I complained about this to various friends and they said “no, no, the whole internet is like this!” After hearing this from enough people and perusing the internet some more, I realized that they were right. Further contemplation and experience made me realize that it wasn’t only people on the internet who exhibit high levels of groupthink & strong ideological agendas; rather, this is very common among humans in general! Real-life interactions mask over the effects of groupthink & ideological agendas. I was then amazed at how oblivious I had been up until I learned about these things. All of this has been cathartic and life-changing.
Not sure, I don’t really pay enough attention. As a rule, I avoid drama in general on account of lack of interest in the arguments being made on either side. The things that I’ve noticed most are those connected with gender wars and with Roko’s post being banned. Then of course there were my own controversial posts back in August.
Sounds about right.
In the interest of avoiding the spread of false ideas, it should be pointed out that Roko was not banned; rather, his post was “banned” (jargon for actually deleted, as opposed to “deleted”, which merely means removed from the various “feeds”: “New”, the user’s overview, etc.). Roko himself then proceeded to delete (in the ordinary way) all his other posts and comments.
Good point; taw and I both know this but others may not; grandparent corrected accordingly.
Thank you! :D
I’m just some random lurker, but I’d be very interested in these articles. I share your view on cryonics and would like to read some more clarification on what you mean by “compartmentalization failure” and some examples of a rejection of the outside view.
Here’s my view of the current Less Wrong situation.
On compartmentalization failure and related issues, there are two schools present on Less Wrong:
Pro-compartmentalization view—expressed in reason as memetic immune disorder—seems to correlate with outside view, and reasoning from experience. Typical example: me
Anti-compartmentalization view—expressed in taking Ideas seriously—seems to correlate with “weak inside view” and reasoning from theory. Typical example: Eliezer
Right now there doesn’t seem to be any hope of reaching Aumann agreement between these points of view, and at least some members of both camps view many of other camp’s ideas with contempt. The primary reason seems to be that the kind of arguments that people on one end of the spectrum find convincing people on the other end see as total nonsense, and with full reciprocity.
Of course there’s plenty of issues on which both views agree as well—like religion, evolution, akrasia, proper approach to statistics, and various biases (I think outside viewers seem to demand more evidence that these are also a problem outside laboratory than inside viewers, but it’s not a huge disagreement). And many other disagreements seem to be unrelated to this.
Is this outside-viewers/pro-compartmentalization/firm-rooting-in-experience/caution vs weak-inside-viewers/anti-compartmentalization/pure-reason/taking-ideas-seriously spectrum only my impression, or do other people see it this way as well?
I might very well be biased, as I feel very strongly about this issue, and the most prominent poster, Eliezer, seems to feel very strongly about this in exactly the opposite way. It seems to me that most people here have a reasonably well-defined position on this issue—but I know better than to trust my impressions of people on an internet forum.
And second question—can you think of any good way for people holding these two positions to reach Aumann agreement?
As for cryonics, it’s a lot of number crunching, textbook economics, outside-view arguments, etc., all leading to very, very low numbers. I might do that someday if I’m really bored.
Phrasing it as pro-compartmentalization might cause unnecessary negative affect for a lot of aspiring rationalists here at LW, though I’m too exhausted to imagine a good alternative. (Just in case you were planning on writing a post about this or the like. Also, Anna Salamon’s posts on compartmentalization were significantly better than my own.)
I’m trying to write up something on this without actually giving readers fear of ideas. I think I could actually scare the crap out of people pretty effectively, but, ah. (This is why it’s been cooking for two months and is still a Google doc of inchoate scribbles.)
A quick observation: a perfect Bayesian mind is impossible to actually build; that much we all know, and nobody cares.
But it’s a lot worse—it is impossible even mathematically. Even if we expected as little from it as consistently following the rule that P(b|a)=P(c|b)=100% implies P(c|a)=100% over unbounded but finite chains of inference on a countable set of statements (without getting into choice of prior, infinite precision, transfinite induction, uncountable domains, etc.; just the merest minimum still recognizable as Bayesian inference), such a mind could trivially solve the halting problem.
Yes, it would always tell you which theorems are true and which are false, Gödel’s theorem be damned. It cannot say anything like P(Riemann hypothesis|basic math axioms)=50%, as this automatically implies a violation of Bayes’ rule somewhere in the network (and there are no compartments to limit the damage once that happens—the whole network becomes invalid).
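For what it’s worth, here is a minimal sketch of the chaining step this argument leans on (my own reconstruction, writing probabilities as 1 rather than 100%):

\[
P(b \mid a) = 1 \;\Rightarrow\; P(a \wedge \neg b) = 0,
\qquad
P(c \mid b) = 1 \;\Rightarrow\; P(b \wedge \neg c) = 0 .
\]

Since \(a \wedge \neg c \subseteq (a \wedge \neg b) \cup (b \wedge \neg c)\), it follows that \(P(a \wedge \neg c) = 0\) and hence \(P(c \mid a) = 1\) whenever \(P(a) > 0\). Iterating along a finite proof chain \(A = s_0, s_1, \ldots, s_n = \varphi\) with \(P(s_{i+1} \mid s_i) = 1\) at each step forces \(P(\varphi \mid A) = 1\). On this reading, if the Riemann hypothesis happens to be provable from the axioms, any coherent assignment other than \(P(\mathrm{RH} \mid A) = 1\) already violates the rules somewhere along such a chain.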
The perfect Bayesian minds that people here so willingly accepted as the gold standard of rationality are mathematically impossible, and there’s no workaround, and no approximation that is of much use.
Ironically, perfect Bayesian inference works really well inside finite or highly regular compartments, with something else limiting its interactions with the rest of the universe.
If you want an outside-view argument that this is a serious problem: if Bayesian minds were so awesome, how is it that even in the very limited machine-learning world, Bayesian-inspired systems are only one of many competing paradigms, well suited to some compartments and not working well in others?
I realize that I just explicitly rejected one of the most basic premises accepted by pretty much everyone here, including me until recently. It surprised me that we were all falling for something so obvious in retrospect.
Robin Hanson’s post on contrarians being wrong most of the time was amazingly accurate again. I’m still not sure which of the ideas I’ve come to believe relied on perfect Bayesian minds being the gold standard of rationality and will need to be reevaluated, but it doesn’t bother me as much now that I’ve fully accepted that compartmentalization is unavoidable, and a pretty good thing in practice.
I think there’s a nice correspondence between the outside view with its set of preferred reference classes and Bayesian inference with its set of preferred priors. Except the outside view can very easily be extended to say “I don’t know”, to estimate its own accuracy as applied to different compartments, to give more complex answers, to evolve in time as reference classes formerly too small to be of any use accumulate enough data to return useful answers, and so on.
For very simple systems, these two should correspond to each other in a straightforward way. For complex systems, we have a choice of sometimes answering “I don’t know” or being inconsistent.
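To make that trade-off concrete, a toy sketch (again just my own illustration; the function name and the ten-sample cutoff are made up):

```python
# Toy "outside view" estimator that stays consistent by sometimes refusing to answer.
from typing import Optional, Sequence

def outside_view_estimate(outcomes: Sequence[bool], min_samples: int = 10) -> Optional[float]:
    """Return the base rate for a reference class, or None for "I don't know"."""
    if len(outcomes) < min_samples:
        return None   # incomplete, but never forced into a guess it can't back up
    return sum(outcomes) / len(outcomes)
```

It answers fewer questions as the price of never contradicting itself; a system obliged to produce a number for everything has to guess even where its reference class is empty.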
I wanted to write this as a top-level post, but a “one of your most cherished beliefs is totally wrong; here’s a sketch of a mathematical proof” post would take a lot more effort to write well.
I tried a few extensions of Bayesian inference that I hoped would be able to deal with it, but this is really fundamental.
You can still use a subjective Bayesian worldview—that P(Riemann hypothesis|basic math axioms)=50% is just your intuition. But you must accept that your probabilities can change with no new data, just by thinking more. This sort of Bayesian inference is just another tool of limited use, with biases, inconsistencies, and compartments protecting it from the rest of the universe.
There is no gold standard of rationality. There simply isn’t. I have a fallback position of the outside view; otherwise it would be about as difficult to accept this as it is for a Christian finally figuring out there is no God, but still wanting to keep the good parts of his or her faith.
Would anyone be willing to write a top level post out of my comment? You’ll either be richly rewarded by a lot of karma, or we’ll both be banned.
A perfect Bayesian is logically omniscient (and logically omniscient agents are perfect Bayesians), and both come with the same problem of being impossible. I don’t see why this fact should be particularly troubling.
An outside view is only as good as the reference class you use. Your reference class does not appear to have many infinitely long levers, infinitely fast processors or a Maxwell’s Demon. I don’t have any reason to expect your hunch to be accurate.
“Outside View” doesn’t mean go with your gut instinct and pick a few superficial similarities.
There is more to that analogy than you’d like to admit.
I’m quite troubled by this downvote.
The only way to be “omniscient” over even a very simple countable universe is to be inconsistent. There is no way to assign probabilities to every node that obeys Bayes’ theorem. It’s a lot like Kolmogorov complexity—both can be useful philosophical tools, but neither is really part of mathematics; they’re just logically impossible.
Finite perfect Bayesian systems are complete and consistent. We’re so used to every example of a Bayesian system ever used being finite that we totally forgot they cannot logically be extended to even the simplest countable systems. We just accepted handwaving that carries results from finite systems into the countable domain.
This is a feature, not a bug.
No outside view systems you can build will be omniscient. But this is precisely what lets them be consistent.
Different outside view systems will give you different results. It’s not so different from Bayesian priors, except you can have outside view systems for countable domains, and there are no Bayesian priors like that at all.
You can easily have a nested outside view system judging outside view systems on which ones work and which don’t. Or some other interesting kind of nesting. Or use different reference classes for different compartments.
Or you could use something else. What we have here are, in a way, all computer programs anyway—and representing them as outside view systems is just a human convenience.
But every single description of reality must either be allowed to say “I don’t know” or blatantly violate the rules of logic. Either way, you will need some kind of compartmentalization to describe reality.
Just to check, is this an expansion of “Nature never tells you how many slots there are on the roulette wheel”?
I thought I’d gotten the idea about Nature and roulette wheels from Taleb, but a fast googling doesn’t confirm that.
It’s not in any way related. Taleb’s point is purely practical—that we rely on very simple models that work reasonably well most of the time, but the very rare cases where they fail often also have a huge impact. You wouldn’t have guessed that life or human-level intelligence might happen looking at the universe up until that point. Their reference class was empty. And then they happened just once and had massive impact.
Taleb would be more convincing if he didn’t act as if nobody even knew the power law. Everything he writes is about how actual humans currently model things, and that can easily be improved (well, there are some people who don’t even know the power law...; or with prediction markets to overcome pundit groupthink).
You could easily imagine that while humans really suck at this, and there’s only so much improvement we can make, perhaps there’s a certain gold standard of rationality—something telling us how to do it right at least in theory, even if we can never actually implement it due to the physical constraints of the universe. Like perfect Bayesians.
My point is that perfect Bayesians can only deal with finite domains. A gold standard of rationality—basically something that would assign probabilities to every outcome within some fairly regular countable domain, while merely being self-consistent and following the basic rules of probability—turns out to be impossible: even the simplest such assignment of probabilities is not possible, even in theory.
You can be self-consistent by sacrificing completeness—for some questions you’d answer “no idea”; or you can be complete by sacrificing self-consistency (subjective Bayesianism is exactly like that, your probabilities will change if you just think more about something, even without observing any new data).
And it’s not only perfect Bayesianism: nothing else can work the way people wish either. Without some gold standard of rationality, without some one true way of describing reality, a lot of other common beliefs just fail.
Compartmentalization, biases, heuristics, and so on—they are not possible to avoid even in theory; in fact, they’re necessary in nearly any useful model of reasoning. Extreme reductionism is out, emergence comes back as an important concept; it’d be a very different Less Wrong.
More down to earth subjects like akrasia, common human biases, prediction markets, religion, evopsy, cryonics, luminosity, winning, science, scepticism, techniques, self-deception, overconfidence, signaling etc. would be mostly unaffected.
On the other hand, so much of the theoretical side of Less Wrong is based on the flawed assumption that perfect Bayesians are at least theoretically possible on infinite domains, so that a true answer always exists even if we don’t know it, that it would need something between a very serious update and simply being thrown away.
Some parts of the theory don’t rely on this at all—like the outside view. But those are not terribly popular here.
I don’t think you’d see even much of the Sequences surviving without a major update.
What are the smallest and/or simplest domains which aren’t amenable to Bayesian analysis?
I’m not sure you’re doing either me or Taleb justice (though he may well be having too much fun going on about how much smarter he is than just about everyone else) -- I don’t think he’s just talking about completely unknown unknowns, or implying that people could get things completely right—just that people could do a great deal better than they generally do.
For example, Taleb talks about a casino which had the probability and gaming part of its business completely nailed down. The biggest threats to the casino turned out to be a strike, embezzlement (I think), and one of its performers being mauled by his tiger. None of these are singularity-level game changers.
In any case, I would be quite interested in more about the limits of Bayesian analysis and how that affects the more theoretical side of LW, and I doubt you’d be downvoted into oblivion for posting about it.
Notice that you’re already talking about domains; you’ve accepted it, more or less.
I’d like to ask the opposite question—are there any non-finite domains where perfect Bayesian analysis makes sense?
On any domain where you can have even extremely limited local rules specified as conditions, and an unbounded world size, you could use perfect Bayesian analysis to say whether any Turing machine halts, or to prove any statement of natural-number arithmetic.
The only difficulty is bridging the language of Bayesian analysis and the language of computational incompleteness. Because nobody seems to really use Bayes like that, I cannot even give a convincing example of how it fails. Nobody has tried, other than in handwaves.
Check things from the Gödel incompleteness theorem and Turing-completeness lists.
It seems that mainstream philosophy figured this out a long time ago. Contrarians turn out to be wrong once again. It’s not new stuff; we just never bothered checking.
Thanks, I now understand what you mean. I’ll have to think further about this.
Personally, I find myself strongly drawn to the anti-compartmentalization position. However, I had bad enough problems with it (let’s just say I’m exactly the kind of person that becomes a fundamentalist, given the right environment) that I appreciate an outside view and want to adopt it a lot more. Making my underlying assumptions and motivations explicit and demanding the same level of proof and consistency of them that I demand from some belief has served me well—so far anyway.
Also, I’d have to admit that I enjoy reading disagreements most, even if just for disagreement’s sake, so I’m not sure I actually want to see Aumann agreement. “Someone is wrong on the internet” syndrome has, on average, motivated me more than reasonable arguments, I’m afraid.
Does it seem to you as well that removing downvote for comments (keeping report for spam and other total garbage etc.) would result in more of this? Hacker News seems to be doing a lot better than subreddits of similar size, and this seems like the main structural difference between them.
Probably yes. I don’t read HN much (reddit provides enough mind crack already), but I never block any comments based on score, only downvote spam and still kinda prefer ye olde days of linear, barely moderated forums. I particularly disagree with “don’t feed the trolls” because I learned tons about algebra, evolution and economics from reading huge flame wars. I thank the cranks for their extreme stubbornness and the ensuing noob-friendly explanations by hundreds of experts.
And indeed, a very interesting discussion grew out of this otherwise rather unfortunate post.
I’m quite well acquainted with irc, mailing lists, wikis, wide variety of chans, somethingawful, slashdot, reddit, hn, twitter, and more such forums I just haven’t used in a while.
There are upsides and downsides to all communication formats and karma/moderation systems, but as far as I can tell the HN karma system seems to strictly dominate the reddit karma system.
If you feel adventurous and don’t mind trolls, I highly recommend giving chans a try (something sane, not /b/ on 4chan) - anonymity (on chans where it’s widely practised; on many, namefagging is rampant) makes people drastically reduce the effort they normally put into signalling and status games.
What you can see there is human thought far less filtered than usual, and there are very few other opportunities to observe that anywhere. When you come back from such an environment to normal life, you will be able to see a lot more clearly how much monkey-tribe politics is present in everyday human communication.
(For some strange reason, online pseudonyms don’t work like full anonymity.)
I find that working with animals is good for this, too. Though it’s rarely politic to say so.
This is the sort of thing that I was referring to here. Very educational experience.
I know what you’re talking about here.
Sure: compartmentalisation is clearly an intellectual sin—reality is all one piece—but we’re running on corrupt hardware so due caution applies.
That’s my view after a couple of months’ thought. Does that work for you?
(And that sums up about 2000 semi-readable words of inchoate notes on the subject. (ctrl-C ctrl-V))
In the present Headless Chicken Mode, by the way, Eliezer is specifically suggesting compartmentalising the very bad idea, having seen people burnt by it. There’s nothing quite like experience to help one appreciate the plus points of compartmentalisation. It’s still an intellectual sin, though.
Compartmentalisation is an “intellectual sin” in certain idealized models of reasoning. The outside view says that not only 100% of human-level intelligences in the universe, but 100% of things even remotely intelligent-ish, were messy systems that used compartmentalisation as one of their basic building blocks, and 0% were implementations of these idealized models—and that in spite of many decades of hard effort, and a lot of ridiculous optimism.
So by the outside view, the only conclusion I see is that the models condemning compartmentalisation are all conclusively proven wrong, and nothing they say about actual intelligent beings is relevant.
And yet we organize our knowledge about reality into an extremely complicated system of compartments.
Attempts at abandoning that and creating one theory of everything, like objectivism (Ayn Rand famously had an opinion about absolutely everything, no disagreements allowed), are disastrous.
I don’t think our hardware is meaningfully “corrupt”. All thinking hardware ever made, or likely to be made, must take appropriate trade-offs and use appropriate heuristics. Ours seems to be pretty good most of the time when it matters. Shockingly good. An ideal reasoner with no constraints is not only physically impossible, it’s not even mathematically possible, by Rice’s theorem etc.
Compartmentalisation is one of the most basic techniques for efficient reasoning with limited resources—otherwise complexity explodes far more than linearly, and plenty of ideas that made a lot of sense in their old context get transplanted to another context where they’re harmful.
The hardware stays what it was, and it was already pretty much fully utilized, so to deal with this extra complexity the model needs to be pruned of a lot of detail the mind could otherwise manage just fine, and/or other heuristics and shortcuts, possibly with far worse consequences, need to be employed a lot more aggressively.
I like this pro-compartmentalization theory, but it is primarily experience which convinces me that abandoning compartmentalization is dangerous and rarely leads to anything good.
Do you mean abandoning it completely, or abandoning it at all?
The practical reason for decompartmentalisation, despite its dangers, is that science works and is effective. It’s not a natural way for savannah apes to think, it’s incredibly difficult for most. But the payoff is ridiculously huge.
So we get quite excellent results if we decompartmentalise right. Reality does not appear to come in completely separate magisteria. If you want to form a map, that makes compartmentalisation an intellectual sin (which is what I meant).
By “appears to”, I mean that if we assume that reality—the territory—is all of a piece, and we then try to form a map that matches that territory, we get things like Facebook and enough food and long lifespans. That we have separate maps called physics, chemistry and biology is a description of our ignorance; if the maps contradict (e.g. when physics and chemistry said the sun couldn’t be more than 20 million years old and geology said the earth was at least 300 million years old [1]), everyone understands something is wrong and in need of fixing. And the maps keep leaking into each other.
This is keeping in mind the dangers of decompartmentalisation. The reason for bothering with it is an expected payoff in usefully superior understanding. People who know science works like this realise that a useful map is one that matches the territory, so decompartmentalise with wild abandon, frequently not considering dangers. And if you tell a group of people not to do something, at least a few will promptly do it. This does help explain engineer terrorists who’ve inadvertently decompartmentalised toxic waste and logically determined that the infidel must be killed. And why if you have a forbidden thread, it’s an obvious curiosity object.
The problem, if you want the results of science, is then not whether to decompartmentalise, but how and when to decompartmentalise. And that there may be dragons there.
[1] Though Kelvin thought he could stretch the sun’s age to 500MY at a push.
But science itself is extremely compartmentalized! Try getting economists and psychologists to agree on anything; yet both get pretty good results, most of the time.
Even microeconomics and macroeconomics make far better predictions when they’re kept separate, and repeated attempts at bringing them together consistently result in disaster.
Don’t imagine that compartmentalization sets up impenetrable barriers once and for all—there’s a lot of cautious exchange between nearby compartments, and their boundaries keep changing all the time. I quite like the “compartments as scientific disciplines” image. You have a lot of highly fuzzy boundaries—like for example computer science to math to theoretical physics to quantum chemistry to biochemistry to medicine. But when you’re sick you don’t ask on programming reddit for advice.
The best way to describe a territory is to use multiple kinds of maps.
I don’t think anything you’ve said and anything I said actually contradict.
What are the examples you’re thinking of, where both are right and said answers contradict, and said contradiction is not resolvable even in principle?
Upvoted. I think this is a useful way to think about things like this. Compartmentalizing and decompartmentalizing aren’t completely wrong, but are wrongly applied in different contexts. So part of the challenge is to convince the person you’re talking to that it’s safe to decompartmentalize in the realm needed to see what you are talking about.
For example, it took me quite some time to decompartmentalize on evolution versus biology because I had a distrust of evolution. It looked like toxic waste to me, and indeed has arguably generated some (social darwinism, e.g.). People who mocked creationists actually contributed to my sense of distrust in the early stages, given that my subjective experience with (young-earth) creationists was not of particularly unintelligent or gullible people. However this got easier when I learned more biology and could see the reference points, and the vacuum of solid evidence (as opposed to reasonable-sounding speculation) for creationism. Later the creationist speculation started sounding less reasonable and the advocates a bit more gullible—but until I started making the connections from evolution to the rest of science, there wasn’t reason for these things to be on my map yet.
I’m starting to think arguments for cryonics should be presented in the form of “what are the rational reasons to decompartmentalize (or not) on this?” instead of “just shut up and decompartmentalize!” It takes time to build trust, and folks are generally justifiably skeptical when someone says “just trust me”. Also it is a quite valid point that topics like death and immortality (not to mention futurism, etc.) are notorious for toxic waste to begin with.
ciphergoth and I talked about cryonics a fair bit a couple of nights ago. He posits that I will not sign up for cryonics until it is socially normal. I checked my internal readout, it came back “survey says you’re right”, and I nodded my head. I surmise this is what it will take in general.
(The above is the sort of result my general memetic defence gives. Possibly-excessive conservatism in actually buying an idea.)
So that’s your whole goal. How do you make cryonics normal without employing the dark arts?
Hang out with cryonicists all the time!
Mike Darwin had a funny idea for that. :)
I think some additional training in DADA would do me a lot of good here. That is, I don’t want to be using the dark arts, but I don’t want to be vulnerable to them either. And the dark arts are extremely common, especially when people are looking for excuses to keep on compartmentalizing something.
A contest for bored advertising people springs to mind: “How would you sell cryonics to the public?” Then filter the results that use dark arts. This will produce better ideas than you ever dreamed.
The hard part of this plan is making it sound like fun for the copywriters. Ad magazine competition? That’s the sort of thing that gets them working on stuff for fun and kudos.
(My psychic powers predict approximately 0 LessWrong regulars in the advertising industry. I hope I’m wrong.)
(And no, I don’t think b3ta is quite what we’re after here.)
I’ve been thinking about this a lot lately. It may be that there is a tendency to jump to solutions too much on this topic. If more time was spent talking about what the questions are that need to be answered for a resolution, perhaps it would have more success in triggering updates.
Quick reading suggests that Hubbard first founded “dianetics” in late 1949/early 1950, and it became “scientology” only in late 1953/early 1954. As far as I can tell it took them many years to become the Scientology we know. There’s some evidence of evaporative cooling at that stage.
And just as David Gerard says, modern Scientology is extreme case. By cult I meant something more like objectivists.
The Wikipedia articles on Scientology are pretty good, by the way. (If I say so myself. I started WikiProject Scientology :-) Mostly started by critics but with lots of input from Scientologists, and the Neutral Point Of View turns out to be a fantastically effective way of writing about the stuff—before Wikipedia, there were CoS sites which were friendly and pleasant but rather glaringly incomplete in important ways, and critics’ sites which were highly informative but frequently so bitter as to be all but unreadable.
(Despite the key rule of NPOV—write for your opponent—I doubt the CoS is a fan of WP’s Scientology articles. Ah well!)