I feel like the more important question is: How specifically has LW succeeded in making this kind of impression on you? I mean, are we so bad at communicating our ideas? Because many things you wrote here seem to me like quite the opposite of LW. But there is a chance that we really are communicating things poorly, and somehow this is an impression people can get. So I am not really concerned about the things you wrote, but rather about the fact that someone could get this impression. Because...
Rationality doesn’t guarantee correctness.
Which is why this site is called “Less Wrong” in the first place. (Instead of e.g. “Absolutely Correct”.) In many places in the Sequences it is written that, unlike the hypothetical perfect Bayesian reasoner, humans are pretty lousy at processing available evidence, even when we try.
deciding what to do in the real world requires non-rational value judgments
Indeed, this is why a rational paperclip maximizer would create as many paperclips as possible. (The difference between irrational and rational paperclip maximizers is that the latter has a better model of the world, and thus probably succeeds in creating more paperclips on average.)
Many LWers seem to assume that being as rational as possible will solve all their life problems.
Let’s rephrase it as “...will give them a better chance of solving their life problems.”
instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done.
Not sure exactly what you suggest here. We should not waste time reflecting, but instead pick a path quickly, because time is important. But we should find data. Uhm… I think that finding and processing the data take some time, so I am not sure whether you recommend doing it or not.
LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a “high-status” organization to be part of, and who may not have many existing social ties locally.
You seem to suggest some sinister strategy is used here, but I am not sure what other approach you would recommend as less sinister. Math, science, philosophy… are topics that mostly nerds care about. How should we do a debate about math, science and philosophy in a way that will be less attractive to nerds, but will attract many extraverted, highly social non-intellectuals, and still produce meaningful results?
Because I think many LWers would actually not oppose trying that, if they believed such a thing was possible and they could organize it.
LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW
This is not strong evidence against the usefulness of LW. If you imagine a parallel universe with an alternative LW that does increase the average success of its readers, then even in that parallel universe, most of the most impressive LW readers became that impressive before reading LW. It is much easier to attract a PhD student at a top university with a smart text than to attract a smart-but-not-so-awesome person and make them a PhD student at a top university during the next year or two.
For example, the reader may be the wrong age to become a PhD student during the time they read LW; they may be too young or too old. Or the reader may have made some serious mistakes in the past (e.g. choosing the wrong university) that even LW cannot help them overcome in the limited time. Or the reader may be so far below the top level that even making them more impressive is not enough to get them a PhD at a top university.
the LW community may or may not … encourage them … to drop out of their PhD program, go to “training camps” for a few months …
WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out of their PhD program to go to LW “training camps” (which by the way don’t take a few months—EDIT: I was wrong, actually there was one).
Here is a real LW discussion with a PhD student; you can see what realistic LW advice would look like. Here is some general study advice. Here is a CFAR “training camp” for students, and it absolutely doesn’t require anyone to drop out of school… hint: it takes two weeks in August.
In summary: real LW does not resemble the picture you described, and is sometimes actually closer to the opposite of it.
WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out of their PhD program to go to LW “training camps” (which by the way don’t take a few months).
When I visited MIRI one of the first conversations I had with someone was them trying to convince me not to pursue a PhD. Although I don’t know anything about the training camp part (well, I’ve certainly been repeatedly encouraged to go to a CFAR camp, but that is only a weekend and given that I teach for SPARC it seems like a legitimate request).
Convincing someone not to pursue a PhD is rather different than convincing someone to drop out of a top-10 PhD program to attend LW training camps. The latter does indeed merit the response WTF.
Also, there are lots of people, many of them graduate students and PhDs themselves, who will try to convince you not to do a PhD. It’s not an unusual position.
I find this presumption (that the most likely cause for disagreement is that someone misunderstood you) to be somewhat abrasive, and certainly unproductive (sorry for picking on you in particular, my intent is to criticize a general attitude that I’ve seen across the rationalist community and this thread seems like an appropriate place). You should consider the possibility that Algernoq has a relatively good understanding of this community and that his criticisms are fundamentally valid or at least partially valid. Surely that is the stance that offers greater opportunity for learning, at the very least.
I certainly considered that possibility and then rejected it. (If there are more than 2 regular commenters here who think that rationality guarantees correctness and will solve all of their life problems, I will buy a hat and then eat it.)
Whether rationality guarantees correctness depends on how one defines “rationality” and “correctness”. Perfect rationality, by most definitions, would guarantee correctness of process. But one aspect of humans’ irrationality is that they tend to focus on results, and think of something as “wrong” simply because a different strategy would have been superior in a particular case.
I have come across serious criticism of the PhD programs at major universities, here on LW (and on OB). This is not quite the same as a recommendation not to enroll in a PhD program, and it most certainly is not the same as a recommendation to quit an ongoing PhD track, but I definitely interpreted such criticism as advice against taking such a PhD. Then again, I have also heard similar criticism from other sources, so it might well be a genuine problem with some PhD tracks.
For what it’s worth my personal experiences with the list of main points (not sure if this should be a separate post, but I think it is worth mentioning):
Rationality doesn’t guarantee correctness.
Indeed, but as Villiam_Bur mentions this is way too high a standard. I personally notice that while not always correct I am certainly correct more often thanks to the ideas and knowledge I found at LW!
In particular, AI risk is overstated
I am not sure but I was under the impression that your suggestion of ‘just building some AI, it doesn’t have to be perfect right away’ is the thought that researchers got stuck on last century (the problem being that even making a dumb prototype was insanely complicated), when people were optimistically attempting to make an AI and kept failing. Why should our attempt be different? As for AI risk itself: I don’t know whether or not LW is blowing the risk out of proportion (in particular I do not disagree with them, I am simply unsure).
LW has a cult-like social structure.
I agree wholeheartedly, you beautifully managed to capture my feelings of unease. By targeting socially awkward nerds (such as me, I confess) it becomes unclear whether the popularity of LW among intellectuals (e.g. university students, I am looking for a better word than ‘intellectuals’ but fail to find anything) is due to genuine content or due to a clever approach to a vulnerable audience. However from my personal experience I can confidently assert that the material from LW (and OB, by the way) indeed is of high quality. So the question that remains is: if LW has good material, why does it/do we still target only a very susceptible audience? The obvious answer is that the nerds are most interested in the material discussed, but as there are many many more non-nerds than nerds it would make sense to appeal to a broader audience (at the cost of quality), right? This would probably take a lot of effort (like writing the Sequences for an audience that has trouble grasping fractions), but perhaps it would be worth it?
Many LWers are not very rational.
In my experience non-LWers are even less rational. I fear that again you have set the bar too high—reading the Sequences will not make you a perfect Bayesian with Solomonoff priors; at best it will make you a somewhat closer approximation. And let me mention again that personally I have gotten decent mileage out of the Sequences (but I am also counting the enjoyment I get from reading the material as one of the benefits; I come here not just to learn but also to have fun).
LW membership would make me worse off.
This I mentioned earlier. I notice that you define success in terms of money and status (makes sense), and the easiest ways to try to get these would involve using the ‘Dark Arts’. If you want a PhD, just guess the teacher’s password. It has worked for me so far (although I was also interested in learning the material, so I read papers and books with understanding as a goal in my spare time). However, these topics are indeed not discussed on LW (and certainly not in the form of ‘In order to get people to do what you want, use these three easy psychological hacks’). Would it solve your problem if such things were available?
“Art of Rationality” is an oxymoron.
Just because something is true does not mean that it is not beautiful?
I agree wholeheartedly, you beautifully managed to capture my feelings of unease. By targeting socially awkward nerds (such as me, I confess) it becomes unclear whether the popularity of LW among intellectuals (e.g. university students, I am looking for a better word than ‘intellectuals’ but fail to find anything) is due to genuine content or due to a clever approach to a vulnerable audience.
I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut’n’pasted the leader’s opinions into their heads. And that’s definitely something that happens around LW.
Note that this does not require malice or even intent on the part of said leader! It’s something happening in the heads of the recipients. But the leader needs to be aware of it—it’s part of the cult attractor, selecting for people looking for stuff to cut’n’paste into their heads.
I know this one because a loved one of mine is pursuing ordination in the Church of England … and basically has this superpower: convincing people of pretty much anything. To the point where they’ll walk out saying “You know, black really is white, when you really think about it …” and then assume that that is their own conclusion, which they came to themselves, when it’s really obvious they cut’n’pasted it in. (These are people of normal intelligence, being a bit too easily convinced by a skilled and sincere arguer … but my loved one does pretty well on the smart ones too.)
As I said to them, “The only reason you’re not L. Ron Hubbard is that you don’t want to be. You’d better hope that’s enough.”
Edit: The tell is not just cut’n’pasting the substance of the opinions, but the word-for-word phrasing.
I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut’n’pasted the leader’s opinions into their heads. And that’s definitely something that happens around LW.
The failure mode might be that it’s not obvious that an autodidact who spent a decade absorbing relevant academic literature will have a very different expressive range than another autodidact who spent a couple months reading the writings of the first autodidact. It’s not hard to get into the social slot of a clever outsider because the threshold for cleverness for outsiders isn’t very high.
The business of getting a real PhD is pretty good at making it clear to most people that becoming an expert takes dedication and work. Internet forums have no formal accreditation, so there’s no easy way to distinguish between “could probably write a passable freshman term paper” knowledgeable and “could take some months off and write a solid PhD thesis” knowledgeable, and it’s too easy for people in the first category to be unaware how far they are from the second category.
I have been contemplating this point. One of the things that sets off red flags for people outside a group is when people in the group appear to have cut’n’pasted the leader’s opinions into their heads. And that’s definitely something that happens around LW.
I don’t know. On the one hand, that’s how you would expect it to look if the leader is right. On the other hand, “cult leader is right” is also how I would expect it to feel if the cult leader were merely persuasive. On the third hand, I don’t feel like I absorbed lots of novel things from the cult leader, but mostly concretified notions and better terms for ideas I’d held already, and I remember many Sequences posts having a critical comment at the top.
A further good sign is that the Sequences are mostly retellings of existing literature. It doesn’t really match the “crazy ideas held for ingroup status” profile of cultishness.
The cut’n’paste not merely of the opinions, but of the phrasing is the tell that this is undigested. Possibly this could be explained by complete correctness with literary brilliance, but we’re talking about one-draft daily blog posts here.
So the question that remains is: if LW has good material, why does it/do we still target only a very susceptible audience?
This (to me) reads like you’re implying intentionality on the part of the writers to target “a very susceptible audience”. I submit the alternative hypothesis that most people who make posts here tend to be of a certain personality type (like you, I’m looking for a better term than “personality type” but failing to find anything), and as a result, they write stuff that naturally attracts people with similar personality types. Maybe I’m misreading you, but I think it’s a much more charitable interpretation than “LW is intentionally targeting psychologically vulnerable people”. As a single data point, for instance, I don’t see myself as a particularly insecure or unstable person, and I’d say I’m largely here because much of what EY (and others on LW) wrote makes sense to me, not because it makes me feel good or fuels my ego.
This would probably take a lot of effort (like writing the Sequences for an audience that has trouble grasping fractions), but perhaps it would be worth it?
With respect, I’d say this is most likely an impossible endeavor. Anyone who wants to try is welcome to, of course, but I’m just not seeing someone who can’t grok fractions being able to comprehend more than 5% of the Sequences.
Not generally—I keep coming back for the clear, on-topic, well-reasoned, non-flame discussion.
Not sure exactly what you suggest here. We should not waste time reflecting...but...
Many (I guess 40-70%) of meetups and discussion topics are focused on pursuing rational decision-making for self-improvement. Honestly I feel guilty about not doing more work and I assume other readers are here not because it’s optimal but because it’s fun.
There’s also a sentiment that being more Rational would fix problems. Often, it’s a lack of information, not a lack of reasoning, that’s causing the problem.
This is not strong evidence against the usefulness of LW.
I agree, and I agree LW is frequently useful. I would like to see more references to non-technical experts for non-technical topics. As an extreme example, I’m thinking of a forum post where some (presumably young) poster asked for a Bayesian estimate on whether a “girl still liked him” based on her not calling, upvoted answers containing Bayes’ Theorem and percentage numbers, and downvoted my answer telling him he didn’t provide enough information. More generally, I think there can be a problem similar to the one in some Christian literature, where people will take “(X) Advice” because they are part of the (X) community even though the advice is not the best available advice.
Essentially, I think the LW norms should encourage people to learn proven technical skills relevant to their chosen field, and should acknowledge that it’s only advisable to think about Rationality all day if that’s what you enjoy for its own sake. I’m not sure to what extent you already agree with this.
A few LW efforts appear to me to be sub-optimal and possibly harmful to those pursuing them, but this isn’t the place for that argument.
How should we do a debate about math, science and philosophy… for non-intellectuals?
Not answering this question is limiting the spread of LW, because it’s easy to dismiss people as not sufficiently intellectual when they don’t join the group. I don’t know the answer here.
A movement aiming to remove errors in thinking is claiming a high standard for being right.
WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out
The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend. I will edit my post to avoid spreading rumors, but I trust the source.
real LW does not resemble the picture you described
The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend. I will edit my post to avoid spreading rumors, but I trust the source.
If it did happen, then I want to know that it happened. It’s just that this is the first time I have even heard of a month-long LW event. (Which may be information about my own ignorance—EDIT: it was, indeed—since until yesterday I didn’t even know SPARC takes two weeks, so I thought one week was the maximum for an LW event.)
I heard a lot of “quit the school, see how successful and rich Zuckerberg is” advice, but it was all from non-LW sources.
I can imagine people at some LW meetup giving this kind of advice, since there is nothing preventing people with opinions of this kind from visiting LW meetups and giving advice. It just seems unlikely, and it certainly is not the LW “crowd wisdom”.
That said, as his friend I think the situation is a lot less sinister than it’s been made out to sound here. He didn’t quit to go to the program, he quit a year or so afterwards to found a startup. He wasn’t all that excited about his PHD program and he was really excited about startups, so he quit and founded a startup with some friends.
Often, it’s a lack of information, not a lack of reasoning, that’s causing the problem.
Embracing the conclusion implied by new information even if it is in disagreement with your initial guess is a vital skill that many people do not have. I was first introduced to this problem here on LW. Of course your claim might still be valid, but I’d like to point out that some members (me) wouldn’t have been able to take your advice if it wasn’t for the material here on LW.
I’m thinking of a forum post where some (presumably young) poster asked for a Bayesian estimate on whether a “girl still liked him” based on her not calling
The problem with this example is really interesting—there exists some (subjectively objective) probability, which we can find with Bayesian reasoning. Your recommendation is meta-advice: rather than attempting to find this probability, you suggest investing some time and effort to get more evidence. I don’t see why this would deserve downvotes (rather, I would upvote it, I think), but note that a response containing percentages and Bayes’ Theorem is an answer to the question.
Saying you didn’t provide enough information for a probability estimate deserves downvotes because it misses the point. You can give probability estimates based on any information that’s presented. The probability estimate will be better with more information but it’s still possible to do an estimate with low information.
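To make the arithmetic concrete, here is a minimal sketch (in Python) of the kind of low-information estimate being discussed: a single Bayesian update on the one piece of evidence given (“she has not called”). The prior and the likelihoods are invented for illustration; the point is only that some estimate can be produced from whatever information is presented.

```python
# Toy Bayesian update for the "does she still like him" question.
# All numbers are illustrative guesses, not claims about the actual situation.

prior = 0.5                 # P(still interested) before considering the evidence
p_no_call_if_yes = 0.3      # P(no call | still interested)
p_no_call_if_no = 0.8       # P(no call | not interested)

# Bayes' theorem: P(still interested | no call)
posterior = (p_no_call_if_yes * prior) / (
    p_no_call_if_yes * prior + p_no_call_if_no * (1 - prior)
)

print(f"P(still interested | no call) = {posterior:.2f}")  # ~0.27 with these guesses
```

With better information the numbers would change, but the mechanics of the update would not.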
At the same time you seem to criticise LW both for being self-help and for approaching rationality in an intellectual way that doesn’t maximize life outcomes.
I do think plenty of people on LW care about rationality in an intellectual way, and care about developing the idea of rationality and about questions such as what happens when we apply Bayes’ theorem to situations where it usually isn’t applied.
In the case of deciding whether “a girl still likes a guy”, a practical answer focused on the situation would probably encourage the guy to ask the girl out. As you describe the situation, nobody actually gave the advice that calculating probabilities is a highly useful way to deal with the issue.
However, that doesn’t mean that the question of applying Bayes’ theorem to the situation is worthless. You might learn something about the practical application of Bayes’ theorem. You also get probability numbers that you could use to calibrate yourself.
Do you argue that calibrating your prediction for high stakes emotional situations isn’t a skill worth exploring, just because we live in a world where nearly nobody is actually good at making calibrated predictions in high stakes emotional situations?
At LW we try to do something new. The fact that new ideas often fail doesn’t imply that we shouldn’t experiment with new ideas. If you aren’t curious about exploring new ideas and only want practical advice, LW might not be the place for you.
The simple aspect of feeling a sense of agency in the face of uncertainty also shouldn’t be underrated.
The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend.
Are you arguing that there aren’t cases where a PhD student has a great idea for a startup and should put that idea into practice and leave his PhD? Especially when he might have the connections to secure the necessary venture capital?
The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend.
I don’t know about month-long LW events, except maybe internships with an LW-affiliated organisation. Doing internships in general can lead people to do something they wouldn’t have thought about before.
Do you argue that calibrating your prediction for high stakes emotional situations isn’t a skill worth exploring …?
No, I agree it’s generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available.
At LW we try to do something new. The fact that new ideas often fail doesn’t imply that we shouldn’t experiment with new ideas. If you aren’t curious about exploring new ideas and only want practical advice, LW might not be the place for you.
I guess what’s really bothering me here is that less-secure or less-wise people can be taken advantage of by confident-sounding higher-status people. I suppose this is no more true in LW than in the world at large. I respect trying new things.
The simple aspect of feeling a sense of agency in the face of uncertainty also shouldn’t be underrated.
Hooray, agency! This is a question I hope to answer.
Are you arguing that there aren’t cases where a PhD student has a great idea for a startup and should put that idea into practice and leave his PhD? Especially when he might have the connections to secure the necessary venture capital?
I’m arguing that it was the wrong move in this case, and hurt him and others. In general, most startups fail, ideas are worthless compared to execution, and capital is available to good teams.
If he’s trying to maximize expected total wages over his career, staying in academia isn’t a good way to do that. Although he’d probably be better off at a larger, more established company than at a startup.
If he’s trying to maximize his career satisfaction, and he wasn’t happy in academia but was excited about startups, he made a good decision. And I think that was the case here.
Some other confounding factors about his situation at the time:
He’d just been accepted to YCombinator, which is a guarantee of mentoring and venture capital
Since he already had funding, it’s not like he was dumping his life savings into a startup expecting a return
He has an open invitation to come back to his PHD program whenever he wants
If you still really want to blame someone for his decision, I think Paul Graham had a much bigger impact on him than anyone associated with LessWrong did.
No, I agree it’s generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available.
It’s an online discussion. There’s a bunch of information that might not be shared because it’s too private to be shared online. I certainly wouldn’t share all the information about a romantic interaction on LW. But I might share enough information to ask an interesting question.
I do consider this case to be an interesting question. I like it when people discuss abstract principles like rational decision making via Bayes’ theorem based on practical real-life examples instead of only far-out thought experiments.
I’m arguing that it was the wrong move in this case, and hurt him and others.
If I’m understanding you right, you don’t even know the individual in question. People drop out of PhD programs all the time. I don’t think you can say whether or not they have good reasons for doing so without investigating the case on an individual basis.
I’d just like to point out that ranking is a function of both the school and the metric, and thus the phrase “top-10 school” is not really well-formed. While it does convey significant information, it implies undue precision, and allowing people to sneak in unstated metrics is problematic.
deciding what to do in the real world requires non-rational value judgments
Indeed, this is why a rational paperclip maximizer would create as many paperclips as possible. (The difference between irrational and rational paperclip maximizers is that the latter has a better model of the world, and thus probably succeeds in creating more paperclips on average.)
But where’s the training in refining your values?
Uhm… I think that finding and processing the data take some time, so I am not sure whether you recommend doing it or not.
And when I think of ‘LW failure modes’, I imagine someone acting without further analysis. For example, let’s say a member of the general population calls people with different political views irrational, and opines that they would raise the quality of some website by leaving. If that person followed through by stalking them and downvoting (manually?) all their past comments, I would conclude he had a mental illness.
For example, let’s say a member of the general population calls people with different political views irrational, and opines that they would raise the quality of some website by leaving.
Plenty of US liberals consider people who voted for Bush irrational and wouldn’t want them to be part of the political discourse. The same goes in the other direction.
If that person followed through by stalking them and downvoting (manually?) all their past comments, I would conclude he had a mental illness.
Welcome to the internet. There are plenty of people who misbehave in online forums. Most online forums are simply not very public about members who they ban and whose posts they delete.
I don’t think stalking is a good word for the documented behavior in this case as all actions happened on this website. There are people who actually do get stalked for things they write online and who do get real life problems from the stalking.
I find this presumption (that the most likely cause for disagreement is that someone misunderstood you) to be somewhat abrasive, and certainly unproductive (sorry for picking on you in particular, my intent is to criticize a general attitude that I’ve seen across the rationalist community and this thread seems like an appropriate place). You should consider the possibility that Algernoq has a relatively good understanding of this community and that his criticisms are fundamentally valid or at least partially valid. Surely that is the stance that offers greater opportunity for learning, at the very least.
When you believe ~A and someone says ‘You believe A’, what else is there? From most generous to least:
I misspoke, or I misunderstood your saying something else as saying I believe A.
You misheard me, or misspoke when saying that I believe A.
You’re arguing in bad faith
Note that ‘I actually secretly believe A’ is not on the list, so it seems to me that Villiam was being as generous as possible.
The cut’n’paste not merely of the opinions, but of the phrasing is the tell that this is undigested. Possibly this could be explained by complete correctness with literary brilliance, but we’re talking about one-draft daily blog posts here.
I feel like, charitably, another explanation would just be that it’s simply a better phrasing than people come up with on their own.
So? Fast doesn’t imply bad. Quite the opposite, fast-work-with-short-feedback-cycle is one of the best ways to get really good.
The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event I heard secondhand from a friend. I will edit my post to avoid spreading rumors, but I trust the source.
I’m glad your experience has been more ideal.
Here’s the program he went to, which did happen exactly once. It was a precursor to the much shorter CFAR workshops: http://lesswrong.com/lw/4wm/rationality_boot_camp/
Thanks!
Now I remember I heard about that in the past, but I forgot completely. It actually took ten weeks!
Saying you didn’t provide enough information for a probability estimate deserves downvotes because it misses the point. You can give probability estimates based on any information that’s presented. The probability estimate will be better with more information but it’s still possible to do an estimate with low information.
Using a Value of Information calculation would be best, especially if tied to proposed experiments.
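For what the suggested Value of Information calculation could look like here, the sketch below (with invented probabilities and utilities) computes the expected value of perfect information: how much better you could expect to do if a proposed experiment, such as simply asking, revealed the true state before you had to choose.

```python
# Toy Value of Information calculation: how much is it worth to find out
# whether she is interested before choosing what to do?
# Probabilities and utilities are invented for illustration only.

p_interested = 0.3
utility = {
    ("ask out", True): 10.0,   # ask, and she is interested
    ("ask out", False): -2.0,  # ask, and she is not
    ("move on", True): 0.0,
    ("move on", False): 0.0,
}
actions = ["ask out", "move on"]

def expected_utility(action, p):
    return p * utility[(action, True)] + (1 - p) * utility[(action, False)]

# Best you can do acting on current beliefs alone.
eu_without_info = max(expected_utility(a, p_interested) for a in actions)

# Best you can do if an experiment first reveals the true state.
eu_with_info = (
    p_interested * max(utility[(a, True)] for a in actions)
    + (1 - p_interested) * max(utility[(a, False)] for a in actions)
)

print(f"Value of information: {eu_with_info - eu_without_info:.2f}")  # 1.40 here
```

An experiment that costs less than that difference (in the same utility units) is worth running first.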
I’m arguing that it was the wrong move in this case, and hurt him and others. In general, most startups fail, ideas are worthless compared to execution, and capital is available to good teams.
By what metric was his decision wrong?
If he’s trying to maximize expected total wages over his career, staying in academia isn’t a good way to do that. Although he’d probably be better off at a larger, more established company than at a startup.
If he’s trying to maximize his career satisfaction, and he wasn’t happy in academia but was excited about startups, he made a good decision. And I think that was the case here.
Some other confounding factors about his situation at the time:
He’d just been accepted to YCombinator, which is a guarantee of mentoring and venture capital
Since he already had funding, it’s not like he was dumping his life savings into a startup expecting a return
He has an open invitation to come back to his PHD program whenever he wants
If you still really want to blame someone for his decision, I think Paul Graham had a much bigger impact on him than anyone associated with LessWrong did.
YC funding is totally worth going after! He made the right choice given that info. That’s what I get for passing on rumors.
Plenty of US liberals consider people who voted for Bush irrational and wouldn’t want them to be part of the political discourse. The same goes in the other direction.
Welcome to the internet. There are plenty of people who misbehave in online forums. Most online forums are simply not very public about members who they ban and whose posts they delete.
I don’t think stalking is a good word for the documented behavior in this case as all actions happened on this website. There are people who actually do get stalked for things they write online and who do get real life problems from the stalking.
Sure, OK.
You don’t say. My point is that many would verbally agree with such claims, but very few become Dennis Markuze.
As far as I know nobody in this community did become Dennis Markuze.
I don’t have the feeling that LW is over the internet base rate. Given how little LW is moderated it’s an extremely civil place.