Cult impressions of Less Wrong/Singularity Institute
I have several questions related to this:
Did anyone reading this initially get the impression that Less Wrong was cultish when they first discovered it?
If so, can you suggest any easy steps we could take to counter that impression?
Is it possible that there are aspects of the atmosphere here that are driving away intelligent, rationally inclined people who might otherwise be interested in Less Wrong?
Do you know anyone who might fall into this category, i.e. someone who was exposed to Less Wrong but failed to become an enthusiast, potentially due to atmosphere issues?
Is it possible that our culture might be different if these folks were hanging around and contributing? Presumably they are disproportionately represented among certain personality types.
If you visit any Less Wrong page for the first time in a cookies-free browsing mode, you’ll see this message for new users:
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
Here are the worst offenders I see on that about page:
Some people consider the Sequences the most important work they have ever read.
Generally, if your comment or post is on-topic, thoughtful, and shows that you’re familiar with the Sequences, your comment or post will be upvoted.
Many of us believe in the importance of developing qualities described in Twelve Virtues of Rationality: [insert mystical sounding description of how to be rational here]
And on the sequences page:
If you don’t read the sequences on Mysterious Answers to Mysterious Questions and Reductionism, little else on Less Wrong will make much sense.
This seems obviously false to me.
These may not seem like cultish statements to you, but keep in mind that you are one of the ones who decided to stick around. The typical mind fallacy may be at work. Clearly there is some population that thinks Less Wrong seems cultish, as evidenced by Google's autocomplete, and these look like good candidates for the things that make them think this.
We can fix this stuff easily, since they’re both wiki pages, but I thought they were examples worth discussing.
In general, I think we could stand more community effort being put into improving our about page, which you can do now here. It’s not that visible to veteran users, but it is very visible to newcomers. Note that it looks as though you’ll have to click the little “Force reload from wiki” button on the about page itself for your changes to be published.
AAAAARRRGH! I am sick to death of this damned topic. It has been done to death.
I have become fully convinced that even bringing it up is actively harmful. It reminds me of a discussion on IRC about how painstakingly and meticulously Eliezer idiot-proofed the sequences, and how it didn't work because people still manage to be idiots about it. It's because of the Death Spirals and the Cult Attractor sequence that people bring up the stupid "LW is a cult hur hur" meme, which would be great dramatic irony if you were reading a fictional version of the history of Less Wrong, since that meme is exactly what Eliezer was trying to combat by writing it. Does anyone else see this? Is anyone else bothered by this?
Really, am I the only one seeing the problem with this?
People thinking about this topic just seem to instantaneously fail basic sanity checks. I find it hard to believe that people even know what they're saying when they parrot out "LW looks kinda culty to me" or whatever. It's like people only want to convey pure connotation. Remember sneaking in connotations, and how you're not supposed to do that? How about, instead of saying "LW is a cult", saying "LW is bad for its members"? That is an actual message, one that speaks negatively of LW but contains more information than negative affective valence. Speaking of which, one of the primary indicators of culthood is being unresponsive to or dismissive of criticism. People regularly accuse LW of this, which is outright batshit. XiXiDu regularly posts SIAI criticism, and it always gets upvoted, no matter how wrong. Not to mention all the other posts (more) disagreeing with claims in what are usually called the Sequences, all highly upvoted by Less Wrong members.
The more people at Less Wrong naively speculate about how the community appears from the outside, throwing around vague negative-affective-valence words and phrases like "cult" and "telling people exactly how they should be", the worse this community will be perceived, and the worse this community will be. I reiterate: I am sick to death of people playing color politics on "whether LW is a cult" without doing any of the following: making the discussion precise and explicit rather than vague and implicit; taking into account that dissent is not only tolerated but encouraged here; remembering that their brains instantly mark "cult" as being associated with wherever it's seen; or any of a million other factors. The "million other factors" is, I admit, a poor excuse, but I am out of breath and emotionally exhausted; forgive the laziness.
Everything that should have needed to be said about this has been said in the Cult Attractor sequence, and, from the Less Wrong wiki FAQ:
Talking about this all the time makes it worse, and worse every time someone talks about it.
What the bleeding fuck.
LW doesn’t do as much as I’d like to discourage people from falling into happy death spirals about LW-style rationality, like this. There seem to be more and more people who think sacrificing their life to help build FAI is an ethical imperative. If I were Eliezer, I would run screaming in the other direction the moment I saw the first such person, but he seems to be okay with that. That’s the main reason why I feel LW is becoming more cultish.
How do you distinguish a happy death spiral from a happy life spiral? Wasting one’s life on a wild goose chase from spending one’s life on a noble cause?
“I take my beliefs seriously, you are falling into a happy death spiral, they are a cult.”
I guess you meant to ask, “how do you distinguish ideas that lead to death spirals from ideas that lead to good things?” My answer is that you can’t tell by looking only at the idea. Almost any idea can become a subject for a death spiral if you approach it the wrong way (the way Will_Newsome wants you to), or a nice research topic if you approach it right.
I’ve recanted; maybe I should say so somewhere. I think my post on the subject was sheer typical mind fallacy. People like Roko and XiXiDu are clearly damaged by the “take things seriously” meme, and what it means in my head is not what it means in the heads of various people who endorse the meme.
You mean when he saw himself in the mirror? :)
Seriously, do you think sacrificing one’s life to help build FAI is wrong (or not necessarily wrong but not an ethical imperative either), or is it just bad PR for LW/SI to be visibly associated with such people?
I think it’s not an ethical imperative unless you’re unusually altruistic.
Also I feel the whole FAI thing is a little questionable from a client relations point of view. Rationality education should be about helping people achieve their own goals. When we meet someone who is confused about their goals, or just young and impressionable, the right thing for us is not to take the opportunity and rewrite their goals while we’re educating them.
It’s hard not to rewrite someone’s goals while educating them, because one of our inborn drives is to gain the respect and approval of people around us, and if that means overwriting some of our goals, well that’s a small price to pay as far as that part of our brain is concerned. For example, I stayed for about a week at the SIAI house a few years ago when attending the decision theory workshop, and my values shift in obvious ways just by being surrounded by more altruistic people and talking with them. (The effect largely dissipated after I left, but not completely.)
Presumably the people they selected for the rationality mini-camp were already more altruistic than average, and the camp itself pushed some of them to the “unusually altruistic” level. Why should SIAI people have qualms about this (other than possible bad PR)?
Pointing out that religious/cultic value rewriting is hard to avoid hardly refutes the idea that LW is a cult.
I don’t think “unusually altruistic” is a good characterization of “doesn’t value personal preferences about some life choices more than the future of humanity”...
Do you believe most people are already quite altruistic in that sense? Why? It seems to me that many people give lip service to altruism, but their actions (e.g. reluctance to donate to highly efficient charities) speak otherwise. I think rationality education should help people achieve the goals they’re already trying to achieve, not the goals that the teacher wants them to achieve.
False dichotomy. Humans are not automatically strategic; we often act on urges, not goals, and even our explicitly conceptualized goals can be divorced from reality, perhaps more so than the urges. There are general-purpose skills that have an impact on behavior (and on explicit goals) by correcting errors in reasoning, not skills specifically aimed at aligning students' explicit goals with those of their teachers.
Rationality is hard to measure. If LW doesn’t make many people more successful in mundane pursuits but makes many people subscribe to the goal of FAI, that’s reason to suspect that LW is not really teaching rationality, but rather something else.
(My opinions on this issue seem to become more radical as I write them down. I wonder where I will end up!)
I didn’t say anything about “rationality”. Whether the lessons help is a separate question from whether they’re aimed at correcting errors of reasoning or at shifting one’s goals in a specific direction. The posts I linked also respond to the objection about people “giving lip service to altruism” but doing little in practice.
Yes, the reasoning in the linked posts implies that deep inside, humans should be as altruistic as you say. But why should I believe that reasoning? I’d feel a lot more confident if we had an art of rationality that made people demonstrably more successful in mundane affairs and also, as a side effect, made some of them support FAI. If we only get the side effect but not the main benefit, something must be wrong with the reasoning.
This is not what the posts are about, even if this works as one of the conclusions. The idea that urges and goals should be distinguished, for example, doesn't say what your urges or goals should be; it stands separately on its own. There are many such results, and ideas such as altruism or the importance of FAI are only a few among them. Do these ideas demonstrate comparatively more visible measurable effect than the other ideas?
If prediction markets were legal, we could much more easily measure whether LW helped rationality. Just ask people to make n bets or predictions per month and see 1) if they did better than the population average and 2) if they improved over time.
In fact, trying to get Intrade legalized in the US might be a very worthwhile project for just this reason (beyond all the general social reasons to like prediction markets).
There is no need to wish or strive for regulatory changes that may never happen: I’ve pointed out in the past that non-money prediction markets generally are pretty accurate and competitive with money prediction markets; so money does not seem to be a crucial factor. Just systematic tracking and judgment.
(Being able to profit may attract some people, like me, but the fear of loss may also serve as a potent deterrent to users.)
I have written at length about how I believe prediction markets helped me, but I have been helped even more by the free, active, you-can-sign-up-right-now-and-start-using-it (really, right now) http://www.PredictionBook.com
I routinely use LW-related ideas and strategies in predicting, and I believe my calibration graph reflects genuine success at predicting.
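For what it's worth, here is a minimal sketch of the kind of self-tracking this comes down to (purely illustrative; it is not PredictionBook's actual scoring code): log each prediction as a stated probability plus whether it came true, then compute a Brier score and a simple calibration table of the sort a calibration graph is drawn from.

```python
from collections import defaultdict

def brier_score(predictions):
    """Mean squared error between stated probability and outcome.
    Lower is better; always saying 50% scores 0.25."""
    return sum((p - float(hit)) ** 2 for p, hit in predictions) / len(predictions)

def calibration_table(predictions):
    """Bucket predictions by stated confidence (nearest 10%) and compare
    that confidence with the observed frequency of being right."""
    buckets = defaultdict(list)
    for p, hit in predictions:
        buckets[round(p, 1)].append(hit)
    return {b: (len(hits), sum(hits) / len(hits)) for b, hits in sorted(buckets.items())}

# Hypothetical log of (stated probability, came true) pairs
log = [(0.9, True), (0.7, True), (0.6, False), (0.8, True), (0.95, True), (0.5, False)]
print(brier_score(log))        # about 0.125
print(calibration_table(log))  # bucket -> (count, observed frequency)
```

If the 90% bucket comes out right roughly 90% of the time and the score beats the 0.25 coin-flip baseline, that is the kind of measurable improvement the grandparent comment is asking for.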
Very nice idea, thanks! After some googling I found someone already made this suggestion in 2009.
If other people have suggested this before, there may be enough background support to make it worth following up on this idea.
When I get home from work, I will post in the discussion forum to see if people would be interested in working to legalize prediction markets (like Intrade) in the US.
[EDITED: shortly after making this post, I saw Gwern’s post above suggesting that an alternative like PredictionBook would be just as good. As a result I did not make a post about legalizing prediction markets and instead tried PredictionBook for a month and a half. After this trial, I still think that making a push to legalize prediction markets would be worthwhile]
It doesn’t sound like you know all that many humans, then. In most times and places, the “future of humanity” is a signal that someone shouldn’t be taken seriously, not an actual goal.
I was talking about the future of humanity, not the “future of humanity” (a label that can be grossly misinterpreted).
… or you estimate the risk to be significant and you want to live past the next N years.
I don’t think this calculation works out, actually. If you’re purely selfish (don’t care about others at all), and the question is whether to devote your whole life to developing FAI, then it’s not enough to believe that the risk is high (say, 10%). You also need to believe that you can make a large impact. Most people probably wouldn’t agree to surrender all their welfare just to reduce the risk to themselves from 10% to 9.99%, and realistically their sacrifice won’t have much more impact than that, because it’s hard to influence the whole world.
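To make the arithmetic explicit, here is a toy sketch of that comparison; every number in it is an assumption picked for illustration, not an estimate anyone in the thread has endorsed.

```python
# Purely selfish expected-value comparison (all figures are illustrative assumptions).
baseline_risk = 0.10             # assumed chance the catastrophe kills you
risk_after_your_effort = 0.0999  # your lifelong effort shaves off 0.01 percentage points
value_of_surviving = 1.0         # normalize the value of your remaining life to 1
cost_of_total_devotion = 0.5     # assumed fraction of that value you give up by sacrificing everything

selfish_gain = (baseline_risk - risk_after_your_effort) * value_of_surviving
print(round(selfish_gain, 6))                 # 0.0001
print(selfish_gain > cost_of_total_devotion)  # False: the trade only clears once altruistic terms are added
```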
Funny in which way? Do you want to avoid an automatic "macro-of-denial" invocation, or are you afraid of them joining Eliezer's ever-growing crowd of memetically subverted FAI-lers?
The latter, I think.
If I teach rationality and deliberately change my students’ goals, that means I fail as a teacher. It’s even worse if their new goal happens to be donating all their money to my organization.
I have always been extremely curious about this. Do people really sacrifice their lives, or is it largely just empty talk?
It seems like the only people doing something are those who would be doing it anyway. I mean, I don't think Eliezer Yudkowsky or Luke Muehlhauser would lead significantly different lives if there were no existential risks. They are just the kind of people who enjoy doing what they do.
Are there people who'd rather play games all day but sacrifice their lives to solve friendly AI?
If developing AGI were an unequivocally good thing, as Eliezer used to think, then I guess he’d be happily developing AGI instead of trying to raise the rationality waterline. I don’t know what Luke would do if there were no existential risks, but I don’t think his current administrative work is very exciting for him. Here’s a list of people who want to save the world and are already changing their life accordingly. Also there have been many LW posts by people who want to choose careers that maximize the probability of saving the world. Judge the proportion of empty talk however you want, but I think there are quite a few fanatics.
Indeed, Eliezer once told me that he was a lot more gung-ho about saving the world when he thought it just meant building AGI as quickly as possible.
I think at one point Eliezer said that, if not for AGI/FAI/singularity stuff, he would probably be a sci-fi writer. Luke explicitly said that when he found out about x-risks he realized that he had to change his life completely.
I sacrificed some very important relationships and the life that could have gone along with them so I could move to California, and the only reason I really care about humans in the first place is because of those relationships, so...
— Nick Tarleton’s twist on T.S. Eliot
Due to comparative advantage, not changing much is actually a relatively good, straightforward strategy: just farm money and redirect it.
As an example of these Altruistic Ones, user Rain has been mentioned, so they are out there. May they all be praised!
Factor in time and demographics. A lot of LWers are young people, looking for ways to make money; they are not able to spend much yet, and haven't had much impact yet. Time will have to show whether these stay true to their goals, or whether they are tempted to go the vicious path of always-growing investments into status.
I’m too irreparably lazy to actually change my life but my charitable donations are definitely affected by believing in FAI.
Sacrificing or devoting? Those are different things. If FAI succeeds they will have a lot more life to party than they would have otherwise so devoting your life to FAI development might be a good bet even from a purely selfish standpoint.
Pascal? Izzat you?
That comment doesn’t actually argue for contributing to FAI development. So I guess I’m not Pascal (damn).
You probably don’t wanna be Pascal anyway. I’m given to understand he’s been a metabolic no-show for about 350 years.
I agree entirely. That post made me go “AAAH” and its rapid karma increase at first made me go “AAAAHH”
My post was mostly about how to optimize appearances, with some side speculation on how our current appearances might be filtering potential users. I agree LW rocks in general. I think we’re mostly talking past each other; I don’t see this discussion post as fitting into the genre of “serious LW criticism” as the other stuff you link to.
In other words, I’m talking about first impressions, not in-depth discussions.
I’d be curious where you got the idea that writing the cult sequence was what touched off the “LW cult” meme. That sounds pretty implausible to me. Keep in mind that no one who is fully familiar with LW is making this accusation (that I know of), but it does look like it might be a reaction that sometimes occurs in newcomers.
Let's keep in mind that LW being bad is a logically distinct proposition, and if it is bad, we want to know it (since we want to know what is true, right?)
And if we can make optimizations to LW culture to broaden participation from intelligent people, that’s also something we want to do, right? Although, on reflection, I’m not sure I see an opportunity for improvement where this is concerned, except maybe on the wiki (but I do think we could stand to be a bit nicer everywhere).
Criticism rocks dude. I’m constantly realizing that I did something wrong and thinking that if I had a critical external observer maybe I wouldn’t have persisted in my mistake for so long. Let’s keep this social norm up.
Okay.
I said stop talking about it and implied that maybe it shouldn’t have been talked about so openly in the first place, and here you are talking about it.
Where else could it have come from? Eliezer’s extensive discussion of cultish behavior gets automatically pattern-matched into helpless cries of “LW is not a cult!” (even though that isn’t what he’s saying and isn’t what he’s trying to say), and this gets interpreted as, “LW is a cult.” Seriously, any time you put two words together like that, people assume they’re actually related.
Elsewise, the only thing I can think of is our similar demographics and a horribly mistaken impression that we all agree on everything (I don’t know where this comes from).
Okay. (I hope you didn’t interpret anything I said as meaning otherwise.)
Point taken; I’ll leave the issue alone for now.
Ya know, if LW and SIAI are serious about optimizing appearances, they might consider hiring a Communications professional. PR is a serious skill and there are people who do it for a living. Those people tend to be on the far end of the spectrum of what we call neurotypical here. That is, they are extremely good at modeling other people, and therefore predicting how other people will react to a sample of copy. I would not be surprised if literally no one who reads LW regularly could do the job adequately.
Edit to add: it’s nice to see that they’re attempting to do this, but again, LW readership is probably the wrong place to look for this kind of expertise.
People who do this for a living (effectively) cost a lot of money. Given the budget of SIAI, putting a communications professional on the payroll at market rates represents a big investment. Transitioning a charity to a state where a large amount of income goes into improving perception (and so securing more income) is a step not undertaken lightly.
It’s at least plausible that a lot of the people who can be good for SIAI would be put off more by professional marketing than by science fiction-flavored weirdness.
That’s a good point. I’m guessing though that there’s a lot of low hanging fruit, e.g. a front page redesign, that would represent a more modest (and one-time) expense than hiring a full-time flack. In addition to costing less this would go a long way to mitigate concerns of corruption. Let’s use the Pareto Principle to our advantage!
It looks a bit better if you consider the generalization in the intro to be mere padding around a post that is really about several specific changes that need to be made to the landing pages.
Unfortunately, Grognor reverts me every time I try to make those changes… Bystanders, please weigh in on this topic here.
I didn’t like your alternative for the “Many of us believe” line either, even though I don’t like that line (it was what I came up with to improve on Luke’s original text). To give the context: the current About page introduces twelve virtues with:
John’s edit was to change it to:
P.S. I no longer supervise the edits to the wiki, but someone should...
He didn’t like my other three attempts at changes either… I could come up with 10 different ways of writing that sentence, but I’d rather let him make some suggestions.
If you made the suggestions here and received public support for one of them it wouldn’t matter much what Grognor thought.
Why don’t you make a suggestion?
*cough* Mine is ‘delete the sentence entirely’. I never really liked that virtues page anyway!
Sounds like a great idea.
I entirely agree with this.
To be clear, you are in favor of leaving the virtues off of the about page, correct?
For what it is worth, yes.
Okay, thanks. One of the other wiki editors didn’t think you meant that.
Whatever wedrifid actually meant is not "apparent consensus", given that there are just 2 upvotes on the statement, where it wasn't apparent to the voters what he actually meant… Reverted, with a suggestion to escalate to a discussion post and vote more clearly. Also, this started from talking about bad wording, which is a separate question from leaving the section out altogether, so the hypothetical discussion posting should distinguish those questions.
Okay.
That change is less bad than the original but it is sometimes better to hold off on changes that may reduce the impetus for further improvement without quite satisfying the need.
To be honest, I don’t have much energy left to fight this. I’d like to rethink the entire page, but if I have to fight tooth and nail for every sentence I won’t.
Who on earth is Grognor?
Hi?
In. Who in earth.
Is this a jest about Grognor sounding like the name of a dwarf or a mythical beast of the depths?
I’m afraid so.
A rambling, cursing tirade against a polite discussion of things that might be wrong with the group (or perceptions of the group) doesn’t improve my perception of the group. I have to say, I have a significant negative impression from Grognor’s response here. In addition to the tone of his response, a few things that added to this negative impression were:
“how painstakingly and meticulously Eliezer idiot-proofed the sequences, and it didn’t work because people still manage to be idiots about it”
Again, the name dropping of Our Glorious Leader Eliezer, long may He reign. (I’m joking here for emphasis.)
“LW is a cult hur hur”
People might not be thinking completely rationally, but this kind of characterization of people who have negative opinions of the group doesn’t win you any friends.
“since it’s exactly what Eliezer was trying to combat by writing it.”
There’s Eliezer again, highlighting his importance as the group’s primary thought leader. This may be true, and probably is, but highlighting it all the time can lead people to think this is cultish.
Thanks for saying that I significantly helped to make Less Wrong look less cultish ;-)
By the way...
Actually, I believe what he said was that you generated evidence that Less Wrong is not cultish, which makes it look more cultish to people who aren’t thinking carefully.
A widely revered figure who has written a million+ words that form the central pillars of LW and has been directly (or indirectly) responsible for bringing many people into the rationality memespace says “don’t do X” so it is obvious that X must be false.
Dismissing accusations of a personality cult around Eliezer by saying Eliezer said “no personality cult” is a fairly poor way of going about it. Two key points:
saying "as a group, we don't worship Eliezer" doesn't guarantee that it is true (groupthink could easily suck us into ignoring evidence)
someone might interpret what Eliezer said as false modesty or an attempt to appear to be a reluctant saviour/messiah (i.e. using dark arts to suck people in)
“I have become fully convinced that even bringing it up is actively harmful.”
What evidence leads you to this conclusion?
Can you provide evidence to support this characterization?
Can you provide evidence to support this characterization?
I would like to see some empirical analysis of the points made here and by the original poster. We should gather some data about perceptions from real users and use that to inform future discussion on this topic. I think we have a starting point in the responses to this post, and comments in other posts could probably be mined for information, but we should also try to find some rational people who are not familiar with Less Wrong, introduce them to it, and ask for their impressions (with the introduction coming from someone acting like they just found the site, are not affiliated with it, and are curious about their friend's impressions, or something like that).
No, it is not. A lack of self-criticism and evaluation is one of the reasons for why people assign cult status to communities.
P.S. Posts with titles along the lines of ‘Epistle to the New York Less Wrongians’ don’t help in reducing cultishness ;-)
(Yeah, I know it was just fun.)
Actually, I believe the optimal utilitarian attitude would be to make fun of them. If you don't take them at all seriously, they will grow to doubt themselves. If you're persistently humorous enough, some of them, thinking themselves comedians, will take your side in poking fun at the rest. In time, LW will have assembled its own team of Witty Defenders responsible for keeping non-serious accusations at bay. This will ultimately lead to long pages of meaningless back and forth between underlings, allowing serious LWians to ignore these distracting subjects altogether. Also, the resulting dialogue will advertise the LW community, while understandably disgusting self-respecting thinkers of every description, thus getting them interested in evaluating the claims of LW on its own terms.
Personally, I think all social institutions are inevitably a bit cultish (society = mob minus the negative connotations), and they all use similarly irrational mechanisms to shield themselves from criticism and maintain prestige. A case could be made that they have to, one reason being that most popular "criticism" is of the form "I've heard it said or implied that quality X is to be regarded as a Bad Thing, and property Y of your organization kind of resembles X under the influence of whatever it is that I'm smoking," or of equally abysmal quality. Heck, the United States government, the most powerful public institution in the world, is way more cultish than average. Frankly, more so than LW has ever been accused of being, to my knowledge. Less Wrong: Less cultish than America!
The top autocompletes for “Less Wrong” are
sequences
harry potter
meetups
These are my (logged-in) Google results for searching "Less Wrong X" for each letter X of the alphabet (some duplicates appear):
akrasia
amanda knox
atheism
australia
blog
bayes
basilisk
bayes theorem
cryonics
charity
cult
discussion
definition
decoherence
decision theory
epub
evolutionary psychology
eliezer yudkowsky
evidence
free will
fanfiction
fanfic
fiction
gender
games
goals
growing up is hard
harry potter
harry potter and the methods of rationality
how to be happy
hindsight bias
irc
inferential distance
iq
illusion of transparency
joint configurations
joy in the merely real
kindle
amanda knox
lyrics
luminosity
lost purposes
leave a line of retreat
meetup
mobi
meditation
methods of rationality
newcomb’s problem
nyc
nootropics
neural categories
optimal employment
overcoming bias
open thread
outside the laboratory
procrastination
pdf
polyamory
podcast
quantum physics
quotes
quantum mechanics
rationality quotes
rationality quotes
rationalwiki
reading list
rationality
sequences
survey
survey results
sequences pdf
twitter
textbooks
three worlds collide
toronto
ugh fields
universal fire
value is fragile
village idiot
wiki
wikipedia
words
what is evidence
yudkowsky
yvain
your strength as a rationalist
your rationality is my business
zombies
zombies the movie
The autocomplete bit doesn’t seem to be too big a problem for Less Wrong.
However, it is one of the immediate autocompletes for “Singularity Institute.” What pages do you get on the first page of results if you search “singularity institute cult”? I see the wikipedia page, the SI website, Michael Anissimov’s blog, RationalWiki, Less Wrong posts about cultishness and death spirals, Lukeprog’s blog, a Forbes article mention of “cargo-cult enthusiasm,” and at the bottom a blog post making a case against SI and other transhumanist organizations.
Luke’s link to How Cults work is pretty funny.
Google’s autocomplete has a problem, which has produced controversy in other contexts: when people want to know whether X is trustworthy, the most informative search they can make is “X scam”. Generally speaking, they’ll find no results and that will be reassuring. Unfortunately, Google remembers those searches, and presents them later as suggestions—implying that there might be results behind the query. Once the “X scam” link starts showing up in the autocomplete, people who weren’t really suspicious of X click on it, so it stays there.
Personal anecdote warning. I semi-routinely google the phrase “X cult” when looking into organizations.
Does this ever work?
I think so, but it's hard to say. I look into organizations infrequently enough that "semi-routinely" leaves me a very small sample size. The one organization that had prominent cult results (not going to name it for obvious reasons) does have several worrying qualities, and they seem related to why it was called a cult. (Edit: minor grammar/style fix.)
Thanks; I updated the post to reflect this.
Eliezer addressed this in part with his "Death Spiral" essay, but there are some features of LW/SI that are strongly correlated with cultishness, other than the ones that Eliezer mentioned such as fanaticism and following the leader:
Having a house where core members live together.
Asking followers to completely adjust their thinking processes to include new essential concepts, terminology, and so on, down to the lowest level of understanding reality.
Claiming that only if you carry out said mental adjustment can you really understand the most important parts of the organization’s philosophy.
Asking for money for a charity, particularly one which does not quite have the conventional goals of a charity, and claiming that one should really be donating a much larger percentage of one’s income than most people donate to charity.
Presenting an apocalyptic scenario including extreme bad and good possibilities, and claiming to be the best positioned to deal with it.
[Added] Demanding that members leave any (other) religion.
Sorry if this seems over-the-top. I support SI. These points have been mentioned, but has anyone suggested how to deal with them? Simply ignoring the problem does not seem to be the solution; nor does loudly denying the charges; nor changing one’s approach just for appearances.
Perhaps consider adding to the list the high fraction of revenue that ultimately goes to paying staff wages.
Oh yes, and the fact that the leader wants to SAVE THE WORLD.
About a third in 2009, the last year for which we have handy data.
Practically all of it goes to them or their "associates", by my reckoning. In 2009 some was burned on travel expenses and accommodation, some was invested, and some was stolen.
Who was actually helped? Countless billions in the distant future—supposedly.
What else should it go to? (Under the assumption that SI’s goals are positive.)
As Larks said above, they are doing thought work: they are not trying to ship vast quantities of food or medical supplies. The product of SI is the output from their researchers, the only way to get more output is to employ more people (modulo improving the output of the current researchers, but that is limited).
So, to recap, this is a proposed part of a list of ways in which the SIAI resembles a cult. It redistributes economic resources from the "rank and file" members up the internal hierarchy without much expenditure on outsiders, just like many cults do.
(Eh. Yes, I think I lost track of that a bit.)
Keeping that in mind: SI has a problem, because acting to avoid the appearance of existing to funnel money to the upper ranks would mean that they can't pay their researchers. There are three broad classes of solutions to this (that I can see):
Give staff little to no compensation for their work
Use tricky tactics to try to conceal how much money goes to the staff
Try to explain to everyone why such a large proportion of the money goes to the staff
All of those seem suboptimal.
Why was this downvoted instead of responded to? Downvoting people who are simply stating negative impressions of the group doesn’t improve impressions of the group.
Most organizations spend most of their money on staff. What else could you do with it? Paying fellowships for “external staff” is a possibility. But in general, good people are exactly what you need.
Often goods or needy beneficiaries are also involved. Charity spending is sometimes classified into:
Program Expenses
Administrative Expenses
Fundraising Expenses
This can be used as a heuristic for identifying good charities.
Not enough in category 1 and too much in categories 2 and 3 is often a bad sign.
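As a rough illustration of that heuristic (the category names follow the list above, but the cutoff and figures below are hypothetical, not any real charity's numbers):

```python
def program_expense_ratio(program, administrative, fundraising):
    """Fraction of total spending that goes to program expenses (category 1)."""
    total = program + administrative + fundraising
    return program / total if total else 0.0

# Hypothetical annual figures in dollars
ratio = program_expense_ratio(program=700_000, administrative=200_000, fundraising=100_000)
print(f"{ratio:.0%}")  # 70%
print("worth a closer look" if ratio < 0.65 else "passes the heuristic")
```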
But they’re not buying malaria nets, they’re doing thought-work. Do you expect to see an invoice for TDT?
Quite apart from the standard complaint about how awful a metric that is.
And yet there are plenty of things that don’t cost much money that they could be doing right now, that I have previously mentioned to SIAI staff and will not repeat (edit: in detail) because it might interfere with my own similar efforts in the near future.
Basically I’m referring to public outreach, bringing in more members of the academic community, making people aware that LW even exists (I wasn’t except when I randomly ran into a few LWers in person), etc.
What’s the reason for downvoting this? Please comment.
As I’ve discussed with several LWers in person, including some staff and visiting fellows, one of the things I disliked about LW/SIAI was that so much of the resources of the organization go to pay the staff. They seemingly wouldn’t even consider proposals to spend a few hundred dollars on other things because they claimed it was “too expensive”.
Add:
Leader(s) are credited with expertise beyond that of conventional experts, in subjects they are not conventionally qualified in.
Studying conventional versions of subjects is deprecated in favour of in-group versions.
Also:
Associated with non-standard and non-monogamous sexual practices.
(Just some more pattern-matching on top of what you see in the parent and grandparent comment. I don’t actually think this is a strong positive indicator.)
The usual version of that indicator is “leader has sex with followers”
One fundamental difference between LW and most cults is that LW tells you to question everything, even itself.
Most, but not all. The Randians come to mind. Even the Buddha encouraged people to be critical, but that doesn't seem to have stopped the cults. I was floored to learn a few weeks ago that Buddhism has even formalized the point at which you stop doubting! When you stop doubting, you become a Sotāpanna; a Sotāpanna is marked by abandoning '3 fetters', the second fetter according to Wikipedia being
As well, as unquestioning acceptance becomes a well-known trait of cults, cults tend to try to hide it. Scientology hides its craziest dogmas until you're well and hooked, for example.
If the Randians are a cult, LW is a cult.
Like the others, the members just think it’s unique in being valid.
If a person disagrees with Rand about a number of key beliefs, do they still count as a Randian?
If they don’t count as an Orthodox Randian, they can always become a Liberal Randian
That depends the largest part on what “a number of key beliefs” is.
Could you elaborate on this?
So there comes a point in Buddhism where you’re not supposed to be skeptical anymore. And Objectivists aren’t supposed to question Ayn Rand.
Would it be productive to be skeptical about whether your login really starts with the letter “M”? Taking an issue off the table and saying, we’re done with that, is not in itself a bad sign. The only question is whether they really do know what they think they know.
I personally endorse the very beginning of Objectivist epistemology—I mean this: “Existence exists—and the act of grasping that statement implies two corollary axioms: that something exists which one perceives and that one exists possessing consciousness, consciousness being the faculty of perceiving that which exists.” It’s the subsequent development which is a mix of further gemlike insights, paths not taken, and errors or uncertainties that are papered over.
In the case of Buddhism, one has the usual problem of knowing, at this historical distance, exactly what psychological and logical content defined “enlightenment”. One of its paradoxes is that it sounds like the experience of a phenomenological truth, and yet the key realization is often presented as the discovery that there is no true self or substantial self. I would have thought that achieving reflective consciousness implied the existence of a reflector, just as in the Objectivist account. Then again, reflection can also produce awareness that traits with which you have identified yourself are conditioned and contingent, so it can dissolve a naive concept of self, and that sounds more like the Buddhism we hear about today. The coexistence of a persistent observing consciousness, and a stream of transient identifications, in certain respects is like Hinduism; though the Buddhists can strike back by saying that the observing consciousness is not eternal and free of causality, it too exists only if it has been caused to exist.
So claims to knowledge, and the existence of a stage where you no longer doubt that this really is knowledge, and get on with developing the implications, do not in themselves imply falsity. In a systematic philosophy based on reason, a description which covers Objectivism, Buddhism, and Less-Wrong-ism, there really ought to be some notion of a development that occurs as you as learn.
The alternative is Zen Rationalism: if you meet a belief on the road (of life), doubt it! It’s a good heuristic if you are beset by nonsense, and it even has a higher form in phenomenological or experiential rationalism, where you test the truth of a proposition about consciousness by seeing whether you can plausibly deny it, even as the experience is happening. But if you do this, even while you keep returning to beginner’s mind, you should still be dialectically growing your genuine knowledge about the nature of reality.
There seems to be some detailed substructure there—which I go over here.
Not just a cult—an END OF THE WORLD CULT.
My favourite documentary on the topic: The End of The World Cult.
I only discovered LW about a week ago, and I got the “cult” impression strongly at first, but decided to stick around anyway because I am having fun talking to you guys, and am learning a lot. The cult impression faded once I carefully read articles and threads on here and realized that they really are rational, well argued concepts rather than blindly followed dogma. However, it takes time and effort to realize this, and I suspect that the initial appearance of a cult would turn many people off from putting out that time and effort.
For a newcomer expecting discussions about practical ways to overcome bias and think rationally, the focus on things like transhumanism and singularity development seems very weird; those appear to be pseudo-religious ideas with no obvious connection to rationality or daily life.
AI and transhumanism are very interesting, but are distinct concepts from rationality. I suggest moving singularity and AI specific articles to a different site, and removing the singularity institute and FHI links from the navigation bar.
There’s also the problem of having a clearly defined leader, with strong controversial opinions which are treated like gospel. I would expect a community which discusses rationality to be more of an open debate/discussion between peers without any philosophical leaders that everybody agrees with. I don’t see any easy solution here, because Eliezer Yudkowsky’s reputation here is well earned- he actually is exceptionally brilliant and rational.
I would also like to see more articles on how to avoid bias, and apply bayesian methods to immediate present day problems and decision making. How can we avoid bias and correctly interpret data from scientific experiments, and then apply this knowledge to make good choices about things such as improving our own health?
Random nitpick: a substantial portion of LW disagrees with Eliezer on various issues. If you find yourself actually agreeing with everything he has ever said, then something is probably wrong.
Slightly less healthy for overall debate is that many people automatically attribute a toxic/weird meme to Eliezer whenever it is encountered on LW, even in instances where he has explicitly argued against it (such as utility maximization in the face of very small probabilities).
Upvoted for sounding a lot like the kinds of complaints I’ve heard people say about LW and SIAI.
There is a large barrier to entry here, and if we want to win more, we can’t just blame people for not understanding the message. I’ve been discussing with a friend what is wrong with LW pedagogy (though he admits that it is certainly getting better). To paraphrase his three main arguments:
We often use nomenclature without necessary explanation for a general audience. Sure, we make generous use of hyperlinks, but without some effort to bridge the gap in the body of our text, we aren’t exactly signalling openness or friendliness.
We have a tendency to preach to the converted. Or as the friend said:
He brought up an example for how material might be introduced to newly exposed folk.
The curse of knowledge can be overcome, but it takes desire and some finesse.
If we intend to win the hearts and minds of the people (or at least make a mark in the greater world), we might want to work on evocative imagery that isn’t immediately cool to futurists and technophiles and sci-fi geeks. Sure, keep the awesome stuff we have, but maybe look for metaphors that work in other domains. In my mind, ideally, we should build a database of ideas and their parallels in other fields (using some degree of field work to actually find the words that work). Eliezer has done some great work this way, like with HP:MoR, and some of his short stories. Maybe the SIAI could shell out money to fund focus groups and interviews a la Luntz, who in my mind is a great Dark Side example of winning.
Edit for formatting and to mention that outreach and not seeming culty seem to be intertwined in a weird way. It is obvious to me that being The Esoteric Order Of LessWrong doesn’t do the world any favors (or us, for that matter), but that by working on outreach, we can be accused of proselytizing. I think it comes down to doing what works without doing the death spiral stuff. And it seems to me that no matter what is done, detractors are going to detract.
That’s an inspiring goal, but it might be worth pointing out that the This American Life episode was extraordinary—when I heard it, it seemed immediately obvious that this was the most impressively clear and efficient hour I’d heard in the course of a lot of years of listening to NPR.
I’m not saying it’s so magical that it can’t be equaled, I’m saying that it might be worth studying.
Here’s what an outsider might see:
“doomsday beliefs” (something “bad” may happen eschatologically, and we must work to prevent this): check
a gospel (The Sequences): check
vigorous assertions of untestable claims (Everett interpretation): check
a charismatic leader extracting a living from his followers: check
is sometimes called a cult: check
This is enough to make up a lot of minds, regardless of any additional distinctions you may want to make, sadly.
But an outsider would have to spend some time here to see all those things. If they think LW is accurately described by the c-word even after getting acquainted with the site, there might be no point in trying to change their minds. It’s better to focus on people who are discouraged by first impressions.
I recently read an article about Keith Raniere, the founder of a cult called NXIVM (pronounced “nexium”):
http://www.timesunion.com/local/article/Secrets-of-NXIVM-2880885.php
Raniere reminds me of Yudkowsky, especially after reading cult expert Rick Ross’s assessment of Raniere:
I’m here for only a couple of months, and I didn’t have any impression of cultishness. I saw only a circle of friends doing a thing together, and very enthusiastic about it.
What I also did see (and still do) is specific people just sometimes being slightly crazy, in a nice way. As in: Eliezer's treatment of MWI. Or way too serious fear of weird acausal dangers that fall out of currently best decision theories.
Note: this impression is not because of craziness of the ideas, but because of taking them too seriously too early. However, the relevant posts always have sane critical comments, heavily upvoted.
I’m slightly more alarmed by posts like How would you stop Moore’s Law?. I mean, seriously thinking of AI dangers is good. Seriously considering nuking Intel’s fabs in order to stop the dangers is… not good.
Agreed, except the treatment of MWI does not seem the least bit crazy to me. But what do I know, I'm a crazy physicist.
The conclusions don’t seem crazy (well, they seem “crazy-but-probably-correct”, just like even the non-controversial parts of quantum mechanics), but IIRC the occasional emphasis on “We Have The One Correct Answer And You All Are Wrong” rang some warning bells.
On the other hand: Rationality is only useful to the extent that it reaches conclusions that differ from e.g. the “just believe what everyone else does” heuristic. Yet when any other heuristic comes up with new conclusions that are easily verified, or even new conclusions which sound plausible and aren’t disproveable, “just believe what everyone else does” quickly catches up. So if you want a touchstone for rationality in an individual, you need to find a question for which rational analysis leads to an unverifiable, implausible sounding answer. Such a question makes a great test, but not such a great advertisement...
Choosing between mathematically equivalent interpretations adds 1 bit of complexity that doesn’t need to be added. Now, if EY had derived the Born probabilities from first principles, that’d be quite interesting.
That’s a positive impression. People really look that enthusiastic and well bonded?
Yes to well bonded. People here seem to understand each other far better than average on the net, and it is immediately apparent.
Enthusiastic is a wrong word, I suppose. I meant, sure of doing a good thing, happy to be doing it, etc, not in the sense of applauding and cheering.
Thank you. It is good to be reminded that these things are relative. Sometimes I forget to compare interactions to others on the internet and instead compare them to interactions with people as I would prefer them to be, or even just interactions with people I know in person (and have rather ruthlessly selected for not being annoying).
Speaking for myself, I know of at least four people who know of Less Wrong/SI but are not enthusiasts, possibly due to atmosphere issues.
An acquaintance of mine attends Less Wrong meetups and describes most of his friends as being Less Wrongers, but doesn’t read Less Wrong and privately holds reservations about the entire singularity thing, saying that we can’t hope to say much about the future more than 10 years in advance. He told me that one of his coworkers is also skeptical of the singularity.
A math student/coder I met at an entrepreneurship event told me Less Wrong had good ideas but was “too pretentious”.
I was interviewing for an internship once, and the interviewer and I realized we had a mutual acquaintance who was a Less Wronger and SI donor. He asked me if I was part of that entire group, and I said yes. His attitude was a bit derisive.
The FHI are trying to do a broadly similar thing from within academia. They seem less kooky and cultish—probably as a result of trying harder to avoid cultishness.
I don’t know why you would assume that it’s “probably as a result of trying harder to avoid cultishness.” My prior is that they just don’t seem cultish because academics are often expected to hold unfamiliar positions.
I will say that I feel 95% confident that SIAI is not a cult because I spent time there (mjcurzi was there also), learned from their members, observed their processes of teaching rationality, hung out for fun, met other people who were interested, etc. Everyone involved seemed well meaning, curious, critical, etc. No one was blindly following orders. In the realm of teaching rationality, there was much agreement it should be taught, some agreement on how, but total openness to failure and finding alternate methods. I went to the minicamp wondering (along with John Salvatier) whether the SIAI was a cult and obtained lots of evidence to push me far away from that position.
I wonder if the cult accusation in part comes from the fact that it seems too good to be true, so we feel a need for defensive suspicion. Rationality is very much about changing one’s mind and thinking about this we become suspicious that the goals of SIAI are to change our minds in a particular way. Then we discover that in fact the SIAI’s goals (are in part) to change our minds in a particular way so we think our suspicions are justified.
My model tells me that stepping into a church is several orders of magnitude more psychologically dangerous than stepping into a Less Wrong meetup or the SIAI headquarters.
(The other 5% goes to things like “they are a cult and totally duped me and I don’t know it”, “they are a cult and I was too distant from their secret inner cabals to discover it”, “they are a cult and I don’t know what to look for”, “they aren’t a cult but they want to be one and are screwing it up”, etc. I should probably feel more confident about this than 95%, but my own inclination to be suspicious of people who want to change how I think means I’m being generous with my error. I have a hard time giving these alternate stories credit.)
I would consider myself a pretty far outlier on LessWrong (as a female, ENFP (people-person, impulsive/intuitive), Hufflepuff type). So on one hand, my opinion may mean less, because I am not generally the “type” of person associated with LW. On the other hand, if you want to expand LW to more people, then I think some changes need to be made for other “types” of people to also feel comfortable here.
Along with the initial "cult" impression (which eventually dissipates, IMO), what threw me most is the harshness of the forums. I've been on here for about 4 months now, and it's still difficult for me to deal with. Also, I agree that topics like FAI and Singularitarianism aren't necessarily the best things to be discussing when trying to get people interested in rationality.
I am well-aware that the things that would make LW more comfortable for me and others like me, would make it less comfortable for many of the current posters. So there is definitely a conflict of goals.
Goal A- Grow LW and make rationality more popular- Need to make LW more “nice” and perhaps focused on Instrumental Rationality rather than Singularity and FAI issues.
Goal B- Maintain current culture and level of posts.- Need to NOT significantly change LW, and perhaps focus more on the obscure posts that are extremely difficult for newer people to understand.
AFAICT pursuit of either of these goals will be at the detriment of the other goal.
Could you be more specific about what comes off as harsh to you?
If you’d rather address this as a private message, I’m still interested.
What comes across as harsh to me: downvoting discussion posts because they're accidental duplicates or don't fit some idea of what a discussion post is supposed to be, a lot of the downvoting that goes on in general, and unbridled or curt disagreement (like Grognor's response to my post. You saw him cursing and yelling, right? I made this post because I thought the Less Wrong community could use optimization on the topics I wrote about, not because I wanted to antagonize anyone.)
PM’d response. General agreement with John below (which I didn’t see until just now).
This person might have been in the same place as a math grad student I know. They read a little Less Wrong and were turned off. Then they attended a LW-style rationality seminar and responded positively, because it was more “compassionate”. What they mean is this: A typical epistemology post on Less Wrong might sound something like
(That’s not a quote.) Whereas the seminar sounded more like
Similarly, an instrumental-rationality post here might sound like
Whereas the seminar sounds more like
Of course, both approaches are good and necessary, and you can find both on Less Wrong.
Defending oneself from the cult accusation just makes it worse. Did you write a long excuse why you are not a cult? Well, that’s exactly what a cult would do, isn’t it?
To be accused is to be convicted, because the allegation is unfalsifiable.
Trying to explain something is drawing more attention to the topic, from which people will notice only the keywords. The more complex explanation you make, especially if it requires reading some of your articles, the worse it gets.
The best way to win is to avoid the topic.
Unfortunately, someone else can bring this topic and be persistent enough to make it visible. (Did it really happen on a sufficient scale, or are we just creating it by our own imagination?) Then, the best way is to make some short (not necessarily rational, but cached-thought convincing) answer and then avoid the topic. For example: “So, what exactly is that evil thing people on LW did? Downvote someone’s forum post? Seriously, guys, you need to get some life.”
And now, everybody stop worrying and get some life. ;-)
It could also help to make the site seem a bit less serious. For example put more emphasis on the instrumental rationality on the front page. People discussing best diet habits don’t seem like a doomsday cult, right?
The Sequences could be recommended somewhat differently, for example: “In this forum we sometimes discuss some complicated topics. To make the discussion more efficient and avoid endlessly repeating the same arguments about statistics, evolution, quantum mechanics, et cetera, it is recommended to read the Sequences.” Not like ‘you have to do this’, but rather like ‘read the FAQ, please’. Also in discussion, instead of “read the Sequences” it is better to recommend one specific sequence, or one article.
Relax, be friendly. But don’t hesitate to downvote a stupid post, even if the downvotee threatens to accuse you of whatever.
I’m having trouble thinking up examples of cults, real or fictional, that don’t take an interest in what their members eat and drink.
I don’t think the best way to win is to avoid the topic. A healthy discussion of false impressions and how to correct them, or other failings a group may have, is a good indication to me of a healthy community. This post for example caused my impression of LW to increase somewhat, but some of the responses to it have caused my impression to decrease below its original level.
Then let's discuss "false impressions", or even better "impressions" in general, without focusing on cultishness, which cannot even be defined (because there are so many different kinds of cults). If we focus on making things right, we do not have to discuss the hundred ways they could go wrong.
What is our community (trying to be) like?
Friendly. In more senses of the word: we speak about ethics, we are trying to make a nice community, we try to help each other become stronger and win.
Rational. Instead of superstition and gossip, we discuss how and why things really happen. Instead of happy death spirals, we learn about the world around us.
Professional. By that I do not mean that everyone here is an AI expert, but that the things we do and value here (studying, politeness, exactness, science) are things that most people associate with their jobs rather than their free time. Even when we have fun, it's adults having fun.
So where exactly in the space of human organizations do we belong? Which of the cached thoughts can best be applied to us? People will always try to fit us to some existing model (for example: cult), so why not choose this model rationally? I am not sure, but "educational NGO" sounds close. Science, raising the sanity waterline, et cetera. By resembling something well-known, we become less suspicious, more normal.
This.
Seriously, we need to start doing all the stuff recommended here, but this is perhaps the simplest and most immediate. Someone go do it.
No. The best way is to not be a cult. Since cults are the most efficient known ways of engendering irrational bias, and since LW’s mission statement is the avoidance of irrationality and bias, that is something LW should be doing anyway.
Here’s how:
Don’t have a leader
Don't have a gospel
Don’t have a dogma
Don’t have quasi-religious “meetups”
Don’t have quasi-religious rituals (!)
Don’t have an eschatology
Don’t have a God.
WELCOME CRITICISM AND DISSENT
Summary:
Don’t be a religion of rationality. Be the opposite of a religion.
Reversed stupidity is not intelligence.
Worship of Yud-Suthoth is not rationality.
… and?
Intoning that "reversed stupidity is not intelligence" is not going to switch any Less Wrongers' brains back on.
You misunderstand me. No-one is worshiping Yud-Suthoth and calling it rationality. You proposed, in essence, that we disregard everything connected with religion—this is precisely the fallacy “reversed stupidity is not intelligence” is intended to address. When this was pointed out, you responded with what can only be charitably interpreted as meaning “but religions are irrational!” which isn’t really addressing his point, or indeed any point, since no-one is proposing starting a religion or for that matter joining one.
EDIT: sodding karma toll.
Something like that is happening. For instance, sending people off to the Sequences to find The Answer, when the Sequences don't even say anything conclusive.
That’s just a slogan, not some universal law.
The opposite of most sorts of stupid is still stupid. Particularly most things that are functional enough to proliferate themselves successfully.
If you meant “Have more than one leader” you’d be on to something. That isn’t what you meant though.
There is a difference between the connotations you are going with for ‘gospel’ and what amounts to a textbook that most people haven’t read anyway.
I sometimes wish people would submit to references to rudimentary rational, logical, decision-theoretic, or scientific concepts as if they were dogma. That is far from what I observe.
Socialize in person with rudimentary organisation? Oh the horror!
Actually, I don’t disagree at all on this one. Or at least I’d prefer that anyone who was into that kind of thing did it without it being affiliated with lesswrong in any way except partial membership overlap.
Are you complaining (or shaming with labels the observation) that an economist and an AI researcher attempted to use their respective expertise to make predictions about the future?
Don’t. Working on it...
Most upvoted post. Welcome competent, sane or useful criticism. Don’t give nonsense a free pass just because it is ‘dissent’.
How do you know? Multiple leaders at least dilute the problem.
I’ve read it. There’s some time I’ll never get back.
Not what I meant. Those can be studied anywhere. “MWI is the correct interpretation of QM” is an example of dogma.
Other rationalists manage without it.
No, I am referring to mind-killing aspects of the mythos: it fools people into thinking they are Saving the World. This sense of self-importance is yet another mind-killer. Instead of examining ideas dispassionately, as they should, they develop a mentality of "No, don't take my important world-saving role away from me! I cannot tolerate any criticism of these ideas, because then I will go back to being an ordinary person".
Is this nonsense?
It contains five misspellings in a single paragraph: “utimately” “canot” “statees” “hvae” “ontoogical” which might themselves be enough for a downvote, regardless of content.
As for the is-ought problem, if we accept that “ought” is just a matter of calculations in our brain returning an output (and reject that it’s a matter of e.g. our brain receiving supernatural instruction from some non-physical soul), then the “ought” is describable in terms of the world-that-is, because every algorithm in our brain is describable in terms of the world-that-is.
It's not a matter of "cramming" an entire world-state into your brain—any approximations that your brain is making, including any self-identified deficiencies in the ability to make a moral evaluation in a particular situation, are also encoded in your brain—your current brain, not some hypothetical superbrain.
But we shouldn't accept that, because we can miscalculate an "ought" or anything else. The is-ought problem is the problem of correctly inferring an ought from a tractable number of "is"s.
Perhaps it might be one day, given sufficiently advanced brain scanning, but we don't have that now, so we still have an is-ought gap.
The is-ought problem is epistemic. Being told that I have an epistemically inaccessible black box in my head that calculates oughts still doesn't lead to a situation where oughts can be consciously understood as correct entailments of is's.
One way to miscalculate an “ought” is the same way that we can miscalculate an “is”—e.g. lack of information, erroneous knowledge, false understanding of how to weigh data, etc.
And also, because people aren't perfectly self-aware, we can mistake mere habits or strongly-held preferences for the outputs of our moral algorithm—same way that e.g. a synaesthete might perceive the number 8 to be colored blue, even though there's no "blue" light frequency striking the optic nerve. But that sort of thing doesn't seem like a very deep philosophical problem to me.
We can correct miscalculations where we have a conscious epistemic grasp of how the calculation should work. If morality is a neural black box, we have no such grasp. Such a neural black box cannot be used to plug the is-ought gap, because it does not distinguish correct calculations from miscalculations.
Leaders are useful. Pretty much every cause/movement/group has leadership of some kind.
I’m not really sure how the sequences map onto the Christian Gospel. A catechism, maybe.
Assuming we don’t excommunicate people for disagreeing with it (politely), I’m not sure why not. I mean, we mostly agree that there’s no God, for example; rationality should, presumably, move us closer to the correct position, and if most of us agree that we’ve probably found it, why shouldn’t we assume members agree unless they indicate otherwise?
Or did you have a different meaning of “dogma” in mind?
Because meeting people with similar interests and goals is only done via religion.
Has anyone who’s not a member of this site actually used those rituals as evidence of phygishness? Genuinely asking here.
Because any idea that predicts the end of the world must be discarded a priori?
Because any idea you place in the reference class “god” must be discarded a priori?
An excellent suggestion! In theory, we already do (we could probably do better on this.) Trolling, however, is not generally considered part of that.
I’m not even going to bother linking to the appropriate truism, but reversed stupidity etc.
EDIT: dammit stupid karma toll cutting off my discussions.
Leaders cause people to lapse into thinking "The Guru has an answer, even if I don't understand it". This is already happening on LW.
People say "The answer is in the Sequences" without bothering to check that it is.
Rationalists should think and argue. However LWers just say “this is wrong” and downvote.
Other rationalists manage without them. LWers aren't aware of how religious they seem.
Because it fools people into thinking they are Saving the World. This sense of self-importance is yet another mind-killer. Instead of examining ideas dispassionately, as they should, they develop a mentality of "No, don't take my important world-saving role away from me! I cannot tolerate any criticism of these ideas, because then I will go back to being an ordinary person".
See above. Leads to over-estimation of individual importance, and therefore emotional investment, and therefore mind-killing.
I’ll say
“Trolling” is the blind dogmatist’s term for reasoned criticism.
> I'm not even going to bother linking to the appropriate truism, but reversed stupidity etc.
Stupidity is stupidity, too.
Some things that might be problematic:
I don’t think we actually do that. Insights, sure, but latest insights? Also, it’s mostly cognitive science and social psychology. The insights from probability and decision theory are more in the style of the simple math of everything.
This might sound weird to someone who hasn’t already read the classic example about doctors not being able to calculate conditional probabilities. Like we believe Bayes theorem magically grants us medical knowledge or something.
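For anyone who hasn't seen that example, here is a minimal sketch of the calculation; the specific numbers are the commonly cited mammography figures and are assumed here purely for illustration:

```python
# Classic base-rate example (assumed figures): 1% of screened women have
# breast cancer, the test catches 80% of true cases, and it gives a false
# positive for 9.6% of healthy women.

prior = 0.01            # P(cancer)
sensitivity = 0.80      # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(cancer | positive test) = {posterior:.1%}")  # about 7.8%
```

The commonly reported finding is that most physicians estimate something closer to 70-80%, which is the whole point of the example.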
[the link to rationality boot-camp]
I’m not a native speaker of English so I can’t really tell, but I recall people complaining that the name ‘boot-camp’ is super creepy.
On the about page:
That’s not cultish-sounding but it’s unnecessarily imperative. Introduction thread is optional.
Disclaimer: My partner and I casually refer to LW meetups (which I attend and she does not) as “the cult”.
That said, if someone asked me if LW (or SIAI) “was a cult”, I think my ideal response might be something like this:
“No, it's not; at least not in the sense I think you mean. What's bad about cults is not that they're weird. It's that they motivate people to do bad things, like lock kids in chain-lockers, shun their friends and families, or kill themselves. The badness of being a cult is not weirdness; it's doing harmful things — and, secondarily, coming up with excuses for why the cult gets to do those harmful things. Less Wrong is weird, but not harmful, so I don't think it is a cult in the sense you mean — at least not at the moment.
“That said, we do recognize that "every cause wants to be a cult", that human group behavior does sometimes tend toward cultishness, and that just because a group says 'Rationality' on the label does not mean it contains good thinking. Hoping that we're special and that the normal rules of human behavior don't apply to us would be a really bad idea. It seems that staying self-critical, understanding how cults happen and why, and consciously taking steps to avoid making in-group excuses for bad behavior or bad thinking, is a pretty good strategy for avoiding becoming a cult.”
People use “weird” as a heuristic for danger, and personally I don’t blame them, because they have good Bayesian reasons for it. Breaking a social norm X is positively correlated with breaking a social norm Y, and the correlation is strong enough for most people to notice.
The right thing to do is to show enough social skill to avoid triggering the weirdness alarm. (Just as publishing in serious media is the right way to avoid the "pseudoscience" label.) You cannot expect outsiders to make an exception for LW, suspend their heuristics, and explore the website deeply; that would be asking them to privilege a hypothesis.
If something is “weird”, we should try to make it less weird. No excuses.
So we should be Less Weird now? ;)
We should be winning.
Less Weird is a good heuristic for winning (though a bad heuristic for a site name).
Often by the time a cult starts doing harmful things, its members have made both real and emotional investments that turn out to be nothing but sunk costs. To avoid ever getting into such a situation, people come up with a lot of ways to attempt to identify cults based on nothing more than the non-harmful, best-foot-forward appearance that cults first try to project. If you see a group using “love bombing”, for instance, the wise response is to be wary—not because making people feel love and self-esteem is inherently a bad thing, but because it’s so easily and commonly twisted toward ulterior motives.
That is until people start bombing factories to mitigate highly improbable existential risks.
What do you mean, "initially"? I am still getting that impression! For example, just count the number of times Eliezer (who appears to have only a single name, like Prince or Jesus) is mentioned in the other comments on this post. And he's usually mentioned in the context of "As Eliezer says...", as though the mere fact that it is Eliezer who says these things was enough.
The obvious counter-argument to the above is, “I like the things Eliezer says because they make sense, not because I worship him personally”, but… well… that’s what one would expect a cultist to say, no ?
Less Wrongers also seem to have their own vocabulary (“taboo that term or risk becoming mind-killed, which would be un-Bayesian”). We spend a lot of time worrying about doomsday events that most people would consider science-fictional (at best). We also cultivate a vaguely menacing air of superiority, as we talk about uplifting the ignorant masses by spreading our doctrine of rationality. As far as warning signs go, we’ve got it covered...
Specialized terminology is really irritating to me personally, and off-putting to most new visitors I would think. If you talk to any Objectivists or other cliques with their own internal vocabulary, it can be very bothersome. It also creates a sense that the group is insulated from the rest of the world, which adds to the perception of cultishness.
I think the phrase 'raising the sanity waterline' is a problem. As is the vaguely religious language, like 'litany of Tarski'. I looked up the definition of 'litany' to make sure I was picking up on a religious denotation and not just a religious connotation, and here's what I got:
Not a great word, I think. Also ‘Bayesian Conspiracy.’ There’s no conspiracy, and there shouldn’t be.
Agreed. I realize that the words like “litany” and “conspiracy” are used semi-ironically, but a newcomer to the site might not.
This wording may lose a few people, but it probably helps for many people as well. The core subject matter of rationality could very easily be dull or dry or "academic". The tongue-in-cheek and occasionally outright goofy humor makes the sequences a lot more fun to read.
The tone may have costs, but not being funny has costs too. If you think back to college, more professors have students tune out by being boring than by being esoteric.
(Responding to old post.)
One problem with such ironic usage is that people tend to joke about things that cause themselves stress, and that includes uncomfortable truths or things that are getting too close to the truth. It’s why it actually makes sense to detain people making bomb jokes in airports. So just because the words are used ironically doesn’t mean they can’t reasonably be taken as signs of a cult—even by people who recognize that they are being used ironically.
(Although this is somewhat mitigated by the fact that many cults won’t allow jokes about themselves at all.)
You’d have to be new to the entire internet to think those are being used seriously. And if you’re THAT new, there’s really very little that can be done to prevent misunderstanding no matter where you first land.
On top of that, it's extremely unlikely that someone very new to the internet would start their journey at LessWrong.
Mr. Jesus H. Christ is a bad example. Also there’s this.
The LW FAQ says: >
I suspect that putting a more human face on the front page, rather than just linky text, would help.
Perhaps something like a YouTube video version of the FAQ, featuring two (ideally personable) people talking about what Less Wrong is and is not, and how to get started on it. For some people, seeing is believing. It is one thing to tell them there are lots of different posters here and we’re not fanatics; but that doesn’t have the same impact as watching an actual human with body cues talking.
I don’t believe LW is a cult, but I can see where intelligent, critical thinking people might get that impression. I also think that there may be elitist and clannish tendencies within LW that are detrimental in ways that could stand to be (regularly) examined. Vigilance against irrational bias is the whole point here, right? Shouldn’t that be embraced on the group level as much as on an individual one?
Part of the problem as I see it is that LW can’t decide if it’s a philosophy/science or a cultural movement.
For instance, as already mentioned, there's a great deal of jargon, and there's a general attitude of impatience toward anyone not thoroughly versed in the established concepts and terminology. Philosophies and sciences also have this problem, but the widely accepted and respected philosophical and scientific theories have proven themselves to the world (and weren't taken very seriously until they did). I personally believe there's a lot of substance to the ideas here, but LW hasn't delivered anything dramatic to the world at large. Until it does, it may remain, in the eyes of outsiders, some kind of hybrid of Scientology and Objectivism—an insular group of people with a special language and a revered spokesperson, who claim to have "the answers".
If, however, LW is supposed to be a cultural movement, then I’m sorry, but ”ur doin it wrong”. Cultural movements gain momentum by being inclusive and organic, and by creating a forum for people to express themselves without fear of judgment. Movements are bottom up, and LW often gives the impression of being top down.
I’m not saying that a choice has to be made or even can be made, merely that there are conflicting currents here. I don’t know if I have any great suggestions. I guess the one thing I can say is that while I’ve observed (am observing) a lot of debate and self-examination internally, there’s still a strong outward impression of having found “the answers”. Perhaps if this community presented itself a little more as a forum for the active practice of critical thinking, and a little less as the authoritative source for an established methodology for critical thinking.
And if that doesn’t work, we could always try bus ads.
Thank you for this.
(In light of my other comment, I should emphasize that I really mean that. It is not sarcasm or any other kind of irony.)
I have seen this problem afflict other intellectually-driven communities, and believe me, it is a very hard problem to shake. Be grateful we aren’t getting media attention. The adage, “All press is good press”, has definitely been proven wrong.
I assume that my post has aggravated things? :o(
The word “cult” never makes discussions like these easier. When people call LW cultish, they are mostly just expressing that they’re creeped out by various aspects of the community—some perceived groupthink, say. Rather than trying to decide whether LW satisfies some normative definition of the word “cult,” it may be more productive to simply inquire as to why these people are getting creeped out. (As other commenters have already been doing.)
This exactly. It’s safe to assume that when most people say some organization strikes them as being cultish, they’re not necessarily keeping a checklist.
Somebody please do so. Those examples are just obviously bad.
I got a distinct cultish vibe when I joined, but only from the far-out parts of the site, like UFAI, not from the "modern rationality" discussions. When I raised the issue on #lesswrong, the reaction from most regulars was not very reassuring: somewhat negative and more emotional than rational. The same happened when I commented here. That's why I am looking forward to the separate rationality site, without the added EY idiosyncrasies that are untestable and useless to me, such as the Singularity, UFAI, and MWI.
We should try to pick up "moreright.com" from whoever owns it. It's domain-parked at the moment.
Moreright.net already exists, and it’s a “Bayesian reactionary” blog- that is, a blog for far-rightists who are involved in the Less Wrong community. It’s an interesting site, but it strikes me as decidedly unhelpful when it comes to looking uncultish.
As I see it, Cult = Clique + Weird Ideas
I think the weird ideas are an integral part of LessWrong, and any attempt to disguise them with a fluffy introduction would be counterproductive.
What about Cliquishness? I think that the problem here is that any internet forum tends to become a clique. To take part you need to read through lots of posts, so it requires quite a commitment. Then there is always some indication of your status within the group—Karma score in this case.
My advice would be to link to some non-internet things. Why not have the FHI news feed and links to a few relevant books on Amazon in the column on the right?
“Twelve Virtues of Rationality” has always seemed really culty to me. I’ve never read it, which may be part of the reason. It just comes across as telling people exactly how they should be, and what they should value.
Also, I’ve never liked that quote about the Sequences. I agree with it, what I’ve read of the sequences (and it would be wrong to not count HPMOR in this) is by far the most important work I’ve ever read. But that doesn’t mean that’s what we should advertise to people.
All you are saying here is “The title of the Twelve Virtues makes me feel bad.” That is literally all you are saying, since you admit to not having read it.
I quote:
I’ll tell you one thing. It got my attention. It got me interested in rationality. I’ve shown it to others; they all liked it or were indifferent. If you’re going to say “culty” because of the title, you are both missing the (most important) point and failing to judge based on anything reasonable. And I don’t particularly care if LW appeals to people who don’t even try to be reasonable.
That's still a useful data point. Do we want to scare away people with strong weirdness filters?
The answer to this may very well turn out to be yes.
What proportion of top people at SIAI love sf?
It’s at least plausible that strong weirdness filters interfere with creativity.
On the other hand, weirdness is hard to define—sf is a rather common sort of weirdness these days.
There is no reason to turn them off right away. The blog itself is weird enough. Maybe they will be acclimated, which would be good.
I almost forgot this, but I was pretty put off by the 12 virtues as well when I first came across it on reddit at age 14 or so. My reaction was something like “you’re telling me I should be curious? What if I don’t want to be curious, especially about random stuff like Barbie dolls or stamp collecting?” I think I might have almost sent Eliezer an e-mail about it.
When you put this together with what Eliezer called the bizarre "can't get crap done" phenomenon that afflicts large fractions of our community, which he attributes to feelings of low status, it paints a picture of LW putting off the sort of person who is inclined to feel high status (and is therefore good at getting crap done, but doesn't like being told what to do). This may be unrelated to the cult issue.
Of course, these hypothetical individuals who are inclined to feel high status might not like being told how to think better either… which could mean that Less Wrong is not their cup of tea under any circumstances. But I think it makes sense to shift away from didacticism on the margin.
Ironically, I suspect the “cultlike” problem is that LessWrong/SI’s key claims lack falsifiability.
Friendly AI? In the far future.
Self-improvement? All mental self-improvement is suspected of being a cult, unless it trains a skill outsiders are confident they can measure.
If I have a program for teaching people math, outsiders feel they know how they can check my claims—either my graduates are good at math or not.
But if I have a program for “putting you in touch with your inner goddess”, how are people going to check my claims? For all outsiders know I’m making people feel good, or feel good about me, without actually making them meaningfully better.
Unfortunately, the external falsifiability of LW/SI’s merits is more like the second case than the first. Especially, I suspect, for people who aren’t already big fans of mathematics, information theory, probability, and potential AI.
Organization claims to improve a skill anyone can easily check = school. Organization claims to improve a quality that outsiders don’t even know how to measure = cult.
If and when LW/SI can headline more easily falsifiable claims, it will be less cultlike.
I don’t know if this is an immediately solvable problem, outside of developing other aspects of LW/SI that are more obviously useful/impressive to outsiders, and/or developing a generation of LW/SI fans who are indeed “winners” as rationalists ideally would be.
PredictionBook might help with measuring improvement, in a limited way. You can use it to measure how often your predictions are correct, and whether you are getting better over time. And you could theoretically ask LW-ers and non-LW-ers to make some predictions on PredictionBook, and then compare their accuracy to see if Less Wrong helped. Making accurate predictions of likelihood is a real skill that certainly has the possibility to be very useful – though it depends on what you’re predicting.
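As a rough illustration of what "measuring whether you are getting better" could look like, here is a minimal sketch that scores calibration with a standard Brier score over a hand-collected list of (probability, outcome) pairs; the data below is made up, and this is not a PredictionBook API:

```python
# Brier score over made-up (stated probability, outcome) pairs, e.g. copied
# by hand out of one's PredictionBook history.

predictions = [
    (0.9, True),   # "90% confident X will happen", and it did
    (0.7, False),
    (0.6, True),
    (0.2, False),
]

# Mean squared error between stated probability and actual outcome:
# 0 is perfect, and always answering 50% scores 0.25.
brier = sum((p - int(happened)) ** 2 for p, happened in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```

Splitting the list by date and scoring each period separately would give a crude measure of improvement over time.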
Maybe a substantial number of people are searching for the posts about cultishness.
so I shouldn’t refer people to death spirals and baby eating right away?
Don’t mindkill their cached thoughts.
Offer them a hamster in a tutu first, that’ll be cute and put them at ease.
I think the biggest reason Less Wrong seems like a cult is because there’s very little self-skepticism; people seem remarkably confident that their idiosyncratic views must be correct (if the rest of the world disagrees, that’s just because they’re all dumb). There’s very little attempt to provide any “outside” evidence that this confidence is correctly-placed (e.g. by subjecting these idiosyncratic views to serious falsification tests).
Instead, when someone points this out, Eliezer fumes “do you know what pluralistic ignorance is, and Asch’s conformity experiment? … your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong”.
What’s especially amusing is that EY is able to keep this stuff up by systematically ignoring every bit of his own advice: telling people to take the outside view and then taking the inside one, telling people to look into the dark while he studiously avoids it, emphasizing the importance of AI safety while he embarks on an extremely dangerous way of building AI—you can do this with pretty much every entry in the sequences.
These are the sorts of things that make me think LessWrong is most interesting as a study in psychoceramics.
Offhand, can you think of a specific test that you think ought to be applied to a specific idiosyncratic view?
My read on your comment is: LWers don’t act humble, therefore they are crackpots. I agree that LWers don’t always act humble. I think it’d be a good idea for them to be more humble. I disagree that lack of humility implies crackpottery. In my mind, crackpottery is a function of your reasoning, not your mannerisms.
Your comment is a bit short on specific failures of reasoning you see—instead, you’re mostly speaking in broad generalizations. It’s fine to have general impressions, but I’d love to see a specific failure of reasoning you see that isn’t of the form “LWers act too confident”. For example, a specific proposition that LWers are too confident in, along with a detailed argument for why. Or a substantive argument for why SI’s approach to AI is “extremely dangerous”. (I personally know pretty much everyone who works for SI, and I think there’s a solid chance that they’ll change their approach if your argument is good enough. So it might not be a complete waste of time.)
Now it sounds like you’re deliberately trying to be be inflammatory ಠ_ಠ
Well, for example, if EY is so confident that he’s proven “MWI is obviously true—a proposition far simpler than the argument for supporting SIAI”, he should try presenting his argument to some skeptical physicists. Instead, it appears the physicists who have happened to run across his argument found it severely flawed.
How rational is it to think that you've found a proof that most physicists are wrong, and then never run it by any physicists to see if you're right?
I do not believe that.
As for why SI’s approach is dangerous, I think Holden put it well in the most upvoted post on the site.
I’m not trying to be inflammatory, I just find it striking.
The criticisms at those links have nothing to do with the argument for MWI. They are just about a numerical mistake in an article illustrating how QM works.
The actual argument for MWI that is presented is something like this: Physicists believe that the wavefunction is real and that it collapses on observation, because that is the first model that explained all the data, and science holds onto working models unless they are falsified. But we can also explain all the data by saying that the wavefunction is real and doesn't collapse, if we learn to see the wavefunction as containing multiple worlds that are equally real. The wavefunction doesn't collapse; it just naturally spreads out into separate parts, and what we see is one of those separate parts. A no-collapse theory is simpler than a collapse theory because it has one less postulate, so even though there are no new predictions, by Bayes (or is it Occam?) we can favor the no-collapse theory over the collapse theory. Therefore, there are many worlds.
This is informal reasoning about which qualitative picture of the world to favor, so it is not something that can be verified or falsified by a calculation or an experiment. Therefore, it's not something that a hostile physicist could crisply debunk, even if they wanted to. In the culture of physics there are numerous qualitative issues where there is no consensus, and where people take sides on the basis of informal reasoning. Eliezer's argument is on that level; it is an expression, in LW idiom, of a reason for believing in MWI that quite a few physicists probably share. It can't be rebutted by an argument along the lines that Eliezer doesn't know his physics, because it is an argument which (in another form) a physicist might actually make. So if someone wants to dispute it, they'll have to do so just as if they were intervening in any of the informal professional disagreements which exist among physicists: by lines of argument about plausibility, future theoretical prospects, and so on.
ETA One more comment about the argument for MWI as I have presented it. Physicists don’t agree that the wavefunction is real. The debate over whether it is real, goes all the way back to Schrodinger (it’s a real physical object or field) vs Heisenberg (it’s just a calculating device). The original Copenhagen interpretation was in Heisenberg’s camp: a wavefunction is like a probability distribution, and “collapse” is just updating on the basis of new experimental facts (the electron is seen at a certain location, so the wavefunction should be “collapsed” to that point, in order to reflect the facts). I think it’s von Neumann who introduced wavefunction realism into the Copenhagen interpretation (when he axiomatized QM), and thereby the idea of “observer-induced collapse of the wavefunction” as an objective physical process. Though wavefunction realism was always going to creep up on physicists, since they describe everything with wavefunctions (or state vectors) and habitually refer to these as “the state” of the object, rather than “the state of our knowledge” of the object; also because Copenhagen refused to talk about unobserved realities (e.g. where the electron is, when it’s not being seen to be somewhere), an attitude which was regarded as prim positivistic virtue by the founders, but which created an ontological vacuum that was naturally filled by the de-facto wavefunction realism of physics practice.
BTW, it’s important to note that by some polls an actual majority of theoretical physicists now believe in MWI, and this was true well before I wrote anything. My only contributions are in explaining the state of the issue to nonphysicists (I am a good explainer), formalizing the gross probability-theoretic errors of some critiques of MWI (I am a domain expert at that part), and stripping off a lot of soft understatement that many physicists have to do for fear of offending sillier colleagues (i.e., they know how incredibly stupid the Copenhagen interpretation appears nowadays, but will incur professional costs from saying it out loud with corresponding force, because there are many senior physicists who grew up believing it).
The idea that Eliezer Yudkowsky made up the MWI as his personal crackpot interpretation isn’t just a straw version of LW, it’s disrespectful to Everett, DeWitt, and the other inventors of MWI. It does seem to be a common straw version of LW for all that, presumably because it’s spontaneously reinvented any time somebody hears that MWI is popular on LW and they have no idea that MWI is also believed by a plurality and possibly a majority of theoretical physicists and that the Quantum Physics Sequence is just trying to explain why to nonphysicists / formalize the arguments in probability-theoretic terms to show their nonambiguity.
The original source for that “58%” poll is Tipler’s The Physics of Immortality, where it’s cited (chapter V, note 6) as “Raub 1991 (unpublished)”. (I know nothing about the pollster, L. David Raub, except that he corresponded with Everett in 1980.) Tipler says that Feynman, Hawking, and Gell-Mann answered “Yes, I think the MWI is true”, and he lists Weinberg as another believer. But Gell-Mann’s latest paper is a one-history paper, Weinberg’s latest paper is about objective collapse, and Feynman somehow never managed to go on record anywhere else about his belief in MWI.
I trust Tipler as far as I can throw his book.
(It’s a large book, and I’m not very strong.)
Has anyone seriously suggested you invented MWI? That possibility never even occurred to me.
It's been suggested that I'm the one who invented the idea that it's obviously true rather than just one more random interpretation; or even that I'm fighting a private war for some science-fiction concept, rather than being one infantry soldier in a long and distinguished battle of physicists. Certainly your remark to the effect that "he should try presenting his argument to some skeptical physicists" sounds like this. Any physicist paying serious attention to this issue (most people aren't paying attention to most things most of the time) will have already heard many of the arguments, and not from me. It sounds like we have very different concepts of the state of play.
Can’t help but compare this to the Swiftian battle of big-endians and little-endians, only the interpretational war makes even less sense.
I just can't ignore this. If you take a minute to actually look at the talk section of that Wikipedia page, you will see those polls being torn to pieces.
David Deutsch himself has stated that less than 10% of the people working on quantum foundations believe in MWI, and within that minority there are a lot of diverging views. So this is still not by any means a "majority interpretation".
As Mitchell_Porter has pointed out, Gell-Mann certainly does not believe in MWI. Nor does Steven Weinberg; he renounced his 'faith' in it in a paper last year. Feynman certainly never talked about it, which to me is more than enough indication that he did not endorse it. Hawking is a bit harder; he is on record seemingly being both pro and con, so I guess he is a fence-sitter.
But more important is the fact that none of the proponents agree on what MWI they support. (This includes you, Eliezer.)
Zurek is another fence-sitter, partly pro-some-sort-of-MWI, partly pro-It-from-Bit. Also, his way of getting the Born Rule in MWI is quite a bit different: from what I understand, only the worlds that are "persistent" are actualized. This reminds me of Robin Hanson's mangled worlds, where only some worlds are real and the rest get "cancelled" out somehow. Yet they are completely different ways of looking at MWI. Then you have David Deutsch's fungible worlds, which are slightly different from David Wallace's worlds. Tegmark has his own views, etc.
There seems to be no single MWI and there has been no answer to the Born Rule.
So I want to know why you keep on talking about it as if it were a slam dunk?
Good question.
I think your use of “believe in” is a little suspect here. I’m willing to believe that more than half of all theoretical physicists believe some variant of the MWI is basically right (though the poll can’t have been that recent if Feynman was part of it, alas), but that’s different from the claim that there are no non-MWI interpretations worth considering, which is something a lot of people, including me, seem to be taking from the QP sequence. Do you believe that that’s a majority view, or anything close to one? My impression is that that view is very uncommon, not just in public but in private too...at least outside of Less Wrong.
That sounds correct to me. A physicist who also possesses probability-theory expertise and who can reason with respect to Solomonoff Induction and formal causal models should realize that single-world variants of QM are uniformly unworkable (short of this world being a runtime-limited computer simulation); but such expertise is rare (though not unheard-of) among professional physicists, and among the others, you can hardly blame them for trying to keep an open mind.
Penrose's objective collapse theory, which says that the entanglement scale is limited by gravity, with the result that macroscopic objects remain essentially classical, does not look all that unworkable.
It'd still be the only FTL, discontinuous, non-differentiable, non-CPT-symmetric, non-unitary, non-local-in-the-configuration-space, etc., etc., process in all of physics, invoked to explain a phenomenon (why do we see only one outcome?) that doesn't need explaining.
Well, one advantage of it is that it is testable, and so is not a mere interpretation, which holds a certain amount of appeal to the more old-fashioned of us.
I agree, and I myself was, and am still, sentimentally fond of Penrose for this reason, and I would cheer on any agency that funded a test. However and nonetheless, “testable” is not actually the same as “plausible”, scientifically virtuous as it may be.
Not if it doesn’t allow FTL communication, unless you want to argue that quantum entanglement is a FTL phenomenon, but that wouldn’t be an issue of the particular interpretation.
Not necessarily. Irreversible and stochastic quantum processes can be time-continuous and time-differentiable.
Consider the processes described by the Lindblad equation, for instance.
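For readers who haven't met it, the standard GKSL (Lindblad) form being referred to is, in the usual notation (ρ the density matrix, H the Hamiltonian, the L_k jump operators; this is textbook material, not quoted from the comment):

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2} \left\{ L_k^{\dagger} L_k, \rho \right\} \right)
```

The right-hand side is continuous and differentiable in time even though the evolution it generates is irreversible, which is the point being made.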
CPT symmetry is a property of conventional field theories, not all quantum theories necessarily have it, and IIUC, there are ongoing experiments to search for violations. CPT symmetry is just the last of a series of postulated symmetries, the previous ones (C symmetry, P symmetry, T symmetry and CP symmetry) have been experimentally falsified.
Right, and that’s the point of objective collapse theories.
I’m not sure what you mean by that, but locality in physics is defined with respect to space and time, not to arbitrary configuration spaces.
AFAIK, there have been attempts to derive the Born rule in Everett’s interpretation, but they didn’t lead to uncontroversial results.
I have never seen a proposed mechanism of ontological collapse that actually fits this, though.
The inability to send a signal that you want, getting instead a Born-Rule-based purely random signal, doesn't change the fact that this Born-Rule-based purely random signal is, under ontological collapse, distributed FTL.
AFAIK, Penrose’s interpretation doesn’t describe the details of the collapse process, it just says that above about the “one graviton” level of energy separation collapse will occur.
It doesn’t commit to collapse being instantaneous: It could be that the state evolution is governed by a non-linear law that approximates very well the linear Schrödinger equation in the “sub-graviton” regime and has a sharp, but still differentiable phase transition when approaching the “super-graviton” regime.
The GRW interpretation assumes instantaneous collapse, IIUC, but it would be a trivial modification to have fast, differentiable collapse.
My point is that non-differentiable collapse is not a requirement of objective collapse interpretations.
But that's an issue of QM, irrespective of the particular interpretation. Indeed, the "spooky action at a distance" bugged Einstein and many people of his time, but the modern view is that as long as you don't have causal influences (that is, information transmission) propagating FTL, you don't violate special relativity.
No, it isn't. QM is purely causal and relativistic. You can look into the equations and prove that nothing FTL is in there. The closest you get is accounting for the possibility of a vacuum bubble having appeared near a particle with exactly its energy, and the antimatter part of the bubble then cancels with the particle. And that isn't much like FTL.
When you do an EPR experiment, the appearance of FTL communication arises from the assumption that the knowledge you gain about what you'll see if you go check the other branch of the experiment is something that happens at the other end of the experiment, instead of locally, with the information propagating to the other end of the experiment as you go to check. The existence of nonlocal states does not imply nonlocal communication.
I’m not sure what we are disagreeing about.
My point is that objective collapse is FTL only in the same sense that QM is. That is, if QM isn’t FTL, then collapse isn’t.
I’m puzzled. What does Solomonoff Induction have to say about experimentally undistinguishable (as far as we can practically test, at least) interpretations of the same theory?
But there is a case to be made for relational QM as superior to both MWI and collapse interpretations. I have mentioned it several times. I am still waiting to hear back.
Relational QM is gibberish. Whether the cat is dead or alive is “relative to the observer”. How could that make sense except via many worlds?
It makes sense the way rQM says: there is no non-relational state, so there is no answer to "is the cat dead or alive (absent an observer)". Since rQM says there is no such state, you don't disprove it by insisting that there is.
BTW, there is no simultaneity either.
OK, so suppose we have an observer. Now look at the cat. Is it alive or dead? If it is alive and only alive, well, we can affix the phrase “relative to the observer” but it doesn’t diminish the absoluteness of the cat’s being alive. But if the cat is alive “relative to one observer to which it is alive”, and dead “relative to another observer to which it is dead”, how can we possibly make sense of that except in many-worlds fashion, by saying there are two cats and two observers?
If two observers measure a cat, they will get compatible results. However one observer can have less complete information (“the cat collapsed”) and another more complete (“the cat is uncollapsed”). Observers can disagree about “collapse” because that is just an issue of their information, not an objective property.
“Relational interpretation
The relational interpretation makes no fundamental distinction between the human experimenter, the cat, or the apparatus, or between animate and inanimate systems; all are quantum systems governed by the same rules of wavefunction evolution, and all may be considered “observers.” But the relational interpretation allows that different observers can give different accounts of the same series of events, depending on the information they have about the system.[11] The cat can be considered an observer of the apparatus; meanwhile, the experimenter can be considered another observer of the system in the box (the cat plus the apparatus). Before the box is opened, the cat, by nature of it being alive or dead, has information about the state of the apparatus (the atom has either decayed or not decayed); but the experimenter does not have information about the state of the box contents. In this way, the two observers simultaneously have different accounts of the situation: To the cat, the wavefunction of the apparatus has appeared to “collapse”; to the experimenter, the contents of the box appear to be in superposition. Not until the box is opened, and both observers have the same information about what happened, do both system states appear to “collapse” into the same definite result, a cat that is either alive or dead.”—WP
In the interpretation of QM, one of the divides is between ontic and epistemic interpretations of the wavefunction. Ontic interpretations of the wavefunction treat it as a thing; epistemic interpretations treat it as an incomplete description or a tabulation of uncertainty, just like a probability distribution.
In the relational interpretation of QM, are the states understood as ontic or as epistemic? The passage you quote makes them sound epistemic: the cat knows but the observer outside the box doesn't, so the observer outside the box uses a different wavefunction. That undoubtedly implies that the wavefunction of the observer outside the box is epistemic, not ontic; the cat knows something that the outside observer doesn't, an aspect of reality which is already definite even though it is not definite in the outside observer's description.
Or at least, this ought to imply that quantum states in the relational interpretation are epistemic. However, this is never explicitly stated, and instead meaningless locutions are adopted which make it sound as if the quantum states are to be regarded as ontic, but "relative".
There are certain very limited senses in which it makes sense to say that the state of something is relative. For example, we may be floating in space, and what is up to you may be down to me, so whether one object is above another may be relative to an observer. But clearly such a dodge will not work for something like Schrodinger's cat. Either the cat is alive, dead, both, or neither. It can't be "alive for one observer and dead for another" and still be just one cat. But that is the ontological implication one gets if "relational QM" is interpreted as an ontic interpretation.
On the other hand, if it is an epistemic interpretation, then it still hasn’t answered the question, “what is the nature of reality? what is the physical ontology behind the formalism and the instrumental success?”
It can’t in rQM:
“However, the comparison does not lead to contradiction because the comparison is itself a physical process that must be understood in the context of quantum mechanics. Indeed, O′ can physically interact with the electron and then with the l.e.d. (or, equivalently, the other way around). If, for instance, he finds the spin of the electron up, quantum mechanics predicts that he will then consistently find the l.e.d. on (because in the first measurement the state of the composite system collapses on its [spin up/l.e.d. on] component). That is, the multiplicity of accounts leads to no contradiction precisely because the comparison between different accounts can only be a physical quantum interaction. This internal self-consistency of the quantum formalism is general, and it is perhaps its most remarkable aspect. This self consistency is taken in relational quantum mechanics as a strong indication of the relational nature of the world”—SEP
There were plenty of physicists reading those posts when they first came out on OB (the most famous name being Scott Aaronson). Some later readers have indeed asserted that there’s a problem involving a physically wrong factor of i in the first couple of posts (i.e. that’s allegedly not what a half-silvered mirror does to the phase in real life), which I haven’t yet corrected because I would need to verify with a trusted physicist that this was correct, and then possibly craft new illustrations instead of using the ones I found online, and this would take up too much time relative to the point that talking about a phase change of −1 instead of i so as to be faithful to real-world mirrors is an essentially trivial quibble which has no effect on any larger points. If anyone else wants to rejigger the illustration or the explanation so that it flows correctly, and get Scott Aaronson or another known trusted physicist to verify it, I’ll be happy to accept the correction.
Aside from that, real physicists haven’t objected to any of the math, which I’m actually pretty darned proud of considering that I am not a physicist.
As Scott keeps saying, he’s not a physicist! He’s a theoretical computer scientist with a focus on quantum computing. He clearly has very relevant expertise, but you should get his field right.
I still wonder why you haven't written an update in 4 years regarding this topic, especially in regard to the Born Rule probability not having a solution yet, plus the other problems.
You also have the issue of overlapping vs non-overlapping worlds, which again is a relevant issue in the Many Worlds interpretation. Overlap = the typical one world branching into two worlds. Non-overlap = two identical worlds diverging (Saunders 2010, Wilson 2005-present).
Also, I feel like the QM sequence is a bit incomplete when you do not give any thought to things like Gerard 't Hooft's proposal of a local deterministic reality giving rise to quantum mechanics from a cellular automaton at the Planck scale. It's misleading to say MWI is "a slam dunk" winner when there are so many unanswered questions. Mitchell Porter is one of the few people here who seems to have had a deep understanding of the subject before reading your sequence, so he has raised some interesting points...
I agree that EY is probably overconfident in MWI, although I'm uninformed about QM so I can't say much with confidence. I don't think it's accurate to damn all of Less Wrong because of this. For example, this post questioning the sequence was voted up highly.
I don’t think EY claims to have any original insights pointing to MWI. I think he’s just claiming that the state of the evidence in physics is such that MWI is obviously correct, and this is evidence as to the irrationality of physicists. I’m not too sure about this myself.
Well there have been responses to that point (here’s one). I wish you’d be a bit more self-skeptical and actually engage with that (ongoing) debate instead of summarizing your view on it and dismissing LW because it largely disagrees with your view.
It seems a bit bizarre to say I’ve dismissed LessWrong given how much time I’ve spent here lately.
Fair enough.
Your examples seem … how do I put this … unreliable. The first two are less examples and more insults, since you do not provide any actual examples of these tendencies; the last one would be more serious, if he hadn’t written extensively on why he believes this to be the safest way—the only way that isn’t suicidal—or if you had provided some evidence that his FAI proposals are “extremely dangerous”. And, of course, airily proclaiming that this is true of “pretty much every entry in the sequences” seems, in the context of these examples, like an overgeneralization at best and … well, I’m not going to bother outlining the worst possible interpretation for obvious reasons.
Obviously, observing salt is not pure reasoning. Very little philosophy is pure reasoning; the salient distinction is between informal, everyday observation and deliberately arranged experiments.
It's a rather unavoidable side effect of claiming that you know the optimal way to fulfill one's utility function, especially if that claim sounds highly unusual (unite the human species in making a friendly AI that will create Utopia). There are many groups that make such claims, and either one or none of them can be right. Most people (who haven't already bought into a different philosophy of life) think it's the latter, and thus tend not to take someone seriously when they make extraordinary claims.
Until recognition of the Singularity's imminence and need for attention enters mainstream scientific thought, the people most likely to join us (Scientifically-Literate Atheists and Truth-Lovers) will not seriously consider our claims. I haven't read nearly as much about the nonexistence of Zeus as I have about the nonexistence of Yahweh, because the number of intelligent people who believe in Zeus is insignificant compared to the number of educated Christians. So when 99% of the developed world isn't focusing on friendly-AI theory, it was difficult for me to come to the conclusion that Richard Dawkins and Stephen Hawking and Stephen Fry were all ignorant of one of the most important things on the planet. A few months ago I gave no more thought to cryonics than to cryptozoology, and without MoR I doubt anything would have changed.
Is the goal of the community really to get everyone into the one task of creating FAI? I'm kind of new here, but I'm personally interested in a less direct but maybe more certain (I don't know the hard numbers) and, I feel, synergistic goal: achieving a stable post-scarcity economy, which could free up a lot more people to become hackers/makers of technology and participate in the collective commons. But I'm interested in FAI and particularly machine ethics, and I hang out here because of the rationality and self-improvement angles. In fact I got into my current academic track (embedded systems) because I'm interested in robotics and embodied intelligence, and probably got started reading Hofstadter stuff and trying to puzzle out how minds work.
“Come for the rationality… stay for the friendly AI” maybe?
Please don’t talk about ‘the’ goal of the community as if there’s only one. There are many.
That’s what I was wondering, thank you for providing the link to that post. I wasn’t sure how to read Locke’s statement.
Well, there’s the folks at RationalWiki.
Long-time lurker here. I think LW is not capable enough as a social unit to handle its topic, and I currently believe that participating in LW is not an efficient way to advance its goals.
In order to reach a (hostile) audience one needs to speak its language. However, the ambient ways of carrying out a discussion often intermingle status, identity, and politics with epistemology. In order to advance the position that biased, faith-based, or economy-based thinking is not an epistemologically efficient tool, one needs to take at least the initial steps within this twisted-up "insane troll logic". The end product is to reject the premise the whole argument stands on, but it will never be produced if the thinking doesn't get started. In making this public and praising this kind of transition between modes of thinking, a lot of the machinery temporarily required to play out the drama gets reinforced into a kind of bedrock. It complicates matters that some people simultaneously need to complete a particular step while others need to dispel it. Thus there is a tendency to fixate on a "development step" relevant to the majority and to become hostile to everything else.
I don’t see the need to profess stances on things if the relevant anticipations work correctly. Keying insights to a single namespace and honing them to work against a static context makes applying and discussing them in other contexts needlessly complex. If someone knows a bias / heuristic / concept by some other name, and that makes a LW participant fail to recognize or apply things they have only learned the password for, then LW has managed to isolate insights from their usual and most needed areas of application.
Things that “hardcore” pursuits find valuable are passed on “as is” or “as finalized by AwesomeDude42”. This is faith-based: “because they say so”. Hooked by the “quality of the merchandise”, this communal activity is more a distribution system for those closed packages of tools than an epistemic engine in its own right. I think that even school should be a place of learning rather than a place to receive data about what others have learned.
Because there is a caliber difference, not all members can follow or participate in the production of the “good stuff”; they wait for it to be distributed right out of the oven. Offering a passive “level up” handbook in the form of the Sequences still leaves a big “you must be this tall to participate in this facet of this community”. There is no escaping the cognitive work of the individual, but LW functions more as a prize than as a workbench.
The activity of LW is limited in a content-independent way by social structure, in areas where it wishes to be more than that. It is not the optimal venue for thinking, but that shouldn’t come as a big surprise.
Yes. I know a couple of people with whom I share an interest in Artificial Intelligence (this is my primary focus when loading Less Wrong pages) who told me that they did not like the site’s atmosphere. Atmosphere is not exactly the word they used. One person thought the cryonics content was a deal-breaker. (If you read the piece in the New York Times Sunday Magazine about Robin Hanson and his wife, you will get a good idea of the widespread distaste for the topic.) The other person was not so specific, although it was clear they were turned off completely even if they couldn’t or wouldn’t explain why.
It is obvious that the culture here would be different if the more controversial or unpopular topics were downplayed enough not to discourage people who don’t find the atmosphere convivial.
Here is what I have personally heard or read in comments that people find most bothersome: cryonics, polyamory, pick up artistry, density of jargon, demographic homogeneity (highly educated white males). Any steps to water that down beyond those already taken (pick up artistry is regularly criticized and Bell Curve racial IQ discussion has been all but tabooed) would not be easy to implement quickly and would have consequences beyond making for a more inclusive atmosphere.
I don’t agree that the word “cult” characterizes this issue accurately. I did the Google test you describe and was surprised to see “cult” pop up so fast, but when I think cult I think Hare Krishnas, Charles Manson, David Koresh; I don’t think Singularity Institute, and I don’t think of a number of the organizations on Rick Ross’s pages. Rick Ross is a man whose business makes money by promoting fear of cults. The last time I looked he had Landmark Education listed as a cult; this might be true with an extremely loose definition of the word, but they haven’t killed anybody yet, to the best of my knowledge. I have taken a couple of courses from them, and while the multi-level marketing vibe is irksome, they have some excellent (and rational!) content in their courses. The last time I looked, Ross did not have the Ordo Templi Orientis listed as a cult. When I was a member of that organization there were around a couple of thousand dues-paying members in the United States, so I presume the OTO cult (the word is far more appropriately applied to them than to Landmark) is too small for him to spend resources on.
The poster who replied that he and his wife refer to his Less Wrong activity as his “cult membership” reads to me as light and humorous; I would be surprised if they really classify Less Wrong with Scientology and Charles Manson.
What paper or text should I read to convince me y’all want to get to know reality? That’s a sincere question, but I don’t know how to say more without being rude (which I know you don’t mind).
Put another way: What do you think Harry Potter (of HPMOR) would think of the publications from the Singularity Institute? (I mean, I have my answer).
I got a possible proto-small-religion feeling from SL4 discussions with Eliezer and SI folk back in the day. Any cultishness feeling was with a small c, i.e. not harmful to the participants except for their bank balances, as in the use of the word in “cult following”. There isn’t a good word for this type of organization, which is why it gets lumped in with Cults.
Less Wrong is better than SL4 in this respect, anyway.
Well, it’s nice to know at least you guys see it. Yes, that was one of my reactions. I started reading some of the sequences (which really aren’t pitched at a level that the mass public, or, I’d hazard to say, though not with certainty, even people whose IQs don’t fall at least one standard deviation above the mean, can easily understand). I liked them, though I didn’t fully understand them, and have referred people to them. However, at the time I was looking into a job and did some kind of search through the website. Anyway, I encountered a post from a person who was asking for advice on a job. I can’t find it now, but from what I remember (this was a long time ago and the memory is greatly degraded, though what little I remember may actually be more insightful in this case than a faithful representation of the actual post), the poster talked about divorce and doing a job they hated in order to be able to donate more to charity, and how that was an acceptable though not valued trade-off. And, though I actually agree to a point, that did raise HUGE red flags in my mind for “cult”, particularly when combined with the many messages that seem to be embedded here about donating to the LW non-profits. I fled after reading that and stayed away for a long time. I dunno if it helps or not, but figured I’d share.
Also, I just Googled “Less Wrong” and all I did was add a space and Google auto-suggested “cult”. So things seem to have worsened since this was published.
Related.
The c-word is too strong for what LW actually is. But “rational” is not a complete descriptor either.
It is neither rational nor irrational to embrace cryonics. It may be rational to conclude that someone who wants to live forever and believes body death is the end of his life will embrace cryonics and life extension technologies.
It is neither rational nor irrational to vaunt current human values over any others. It is most likely that current human values are a snapshot in the evolution of humans, and as such are an approximate optimum, in a natural-selection sense, for an environment that existed 10,000 years ago. The idea that “we” lose if we change our values seems more rooted in who “we” decide “we” are. Presumably in the past a human was more likely to have a narrower definition of “we”, including only a few hundred or a few thousand culture-mates. As time has gone on, “we” has grown to cover nationalities, individual races, and pan-national, pan-racial groups for most people. Most Americans don’t identify “American” with a particular race or national background, and many of us don’t even require being born within the US or of US parents to be part of “we.” Why wouldn’t we extend our concept of “we” to include mammals, or all life that evolved on earth, or even all intelligences that evolved or were created on earth? Why would we necessarily identify a non-earth intelligence as “they” and not “we”, as in “we intelligences can stick together and do a better job exploiting the inanimate universe”?
Rationality is a tool, not an answer. Having certain value decisions vaunted over others restricts LessWrong to being a community that uses rationality rather than a community of rationalists or a community serving all who use rationality. It is what Buffett calls “an unforced error.”
Let the downvotes begin! To be clear, I don’t WANT to be downvoted, but my history on this site suggests to me that I might be.
Dunno bout you, but I value my values.
I think I have the same emotional response to “wrong” things as most people. The knowledge that this is bred into me by natural selection sorta takes the wind out of my rationalizations of these feelings in two ways. 1) Although they “feel” like right and wrong, I realize they are just hacks done by evolution. 2) If evolution has seen fit to hack our values in the past to keep us outsurviving others, then it stands to reason that the “extrapolated” values of humanity are DIFFERENT from the “evolved” values of humanity. So no matter how Coherent our Extrapolation of Values will be, it will actually subvert whatever evolution might do to our race. So once we have an FAI with great power and a sense of CEV, we stop evolving. Then we spend the rest of eternity relatively poorly adapted for the environment we are in, with the FAI scrambling to make it all right for us. Sounds like the cluster version of wireheading, in a way.
On the other hand, I suppose I value the modifications that occur to us through evolution and natural selection. Presumably an attempt at CEV would build that in and perhaps the FAI would decide to leave us alone. Don’t we keep reading sci fi where that happens?
Edit: Nevermind, my point was poorly thought out and hastily formulated.
This comment will be heavy with jargon, to convey complex ideas with the minimum required words. That is what jargon is for, after all. The post’s long enough even with this shortening.
Less Wrong inspires a feeling of wonder.
To see humans working seriously to advance the robot rebellion is inspiring: to become better, to overcome the programming laid in by Azathoth, and to actually improve our future.
The audacity to challenge death itself, to reach for the stars, is breathtaking. The piercing insight in many of the works here is startling. And the gift of being able to find joy in the merely real again is priceless. It doesn’t hurt that it’s spearheaded by an extremely intelligent and honest person whose powers of written communication are among the greatest of his generation.
And that sense of awe and wonder makes people flinch. Especially people who have been trained to be wary of that sort of thing. Who’ve seen the glassed-over eyes of their fundamentalist families and the dazed ramblings of hippies named Storm. As much as HJPEV has tried to train himself to never flinch away from the truth, to never let his brain lie to him, MANY of us have been trained just as strongly to always flinch away from awe and wonder produced by charismatic people. In fact, if we had a “don’t let your brain lie to you” instinct as strong as our “don’t let awe and wonder seduce you into idiocy” instinct, we’d be halfway to being good rationalists already.
And honestly, that instinct is a good one. It saves us from insanity 98% of the time. But it’ll occasionally result in a woo/cult-warning where one could genuinely and legitimately feel wonder and awe. I don’t blame people for trusting their instincts and avoiding the site. And it’ll mean we forever get people saying “I dunno what it is, but that Less Wrong site feels kinda cultish to me.”
We’re open, we’re transparent, and we are a positive force in the lives of our members. We’ve got nothing to fear, and that’s why occasional accusations of cultishness will never stick. We’ve just got to learn to live with the vibe and trust that those who stick around long enough to look deeper will see that we’re not a cult.
It’s nice to still have that awe and wonder somewhere. I wouldn’t ever want to give that up just so a larger percentage of the skeptic community accepts us. That feeling is integral to this site, giving it up would kill LW for me.
I think this post can be modified, without much effort, to defend any pseudo-cult, or even a cheesy movie.