Not generally—I keep coming back for the clear, on-topic, well-reasoned, non-flame discussion.
Not sure exactly what you suggest here. We should not waste time reflecting...but...
Many meetups and discussion topics (I’d guess 40-70%) are focused on pursuing rational decision-making for self-improvement. Honestly, I feel guilty about not doing more work, and I assume other readers are here not because it’s optimal but because it’s fun.
There’s also a sentiment that being more Rational would fix problems. Often, it’s a lack of information, not a lack of reasoning, that’s causing the problem.
This is not strong evidence against the usefulness of LW.
I agree, and I agree LW is frequently useful. I would like to see more references to non-technical experts for non-technical topics. As an extreme example, I’m thinking of a forum post where some (presumably young) poster asked for a Bayesian estimate on whether a “girl still liked him” based on her not calling, upvoted answers containing Bayes’ Theorem and percentage numbers, and downvoted my answer telling him he didn’t provide enough information. More generally, I think there can be a problem similar to the one in some Christian literature, where people will take “(X) Advice” because they are part of the (X) community even though it is not the best available advice.
Essentially, I think the LW norms should encourage people to learn proven technical skills relevant to their chosen field, and should acknowledge that it’s only advisable to think about Rationality all day if that’s what you enjoy for its own sake. I’m not sure to what extent you already agree with this.
A few LW efforts appear to me to be sub-optimal and possibly harmful to those pursuing them, but this isn’t the place for that argument.
How should we do a debate about math, science and philosophy… for non-intellectuals?
Not answering this question is limiting the spread of LW, because it’s easy to dismiss people as not sufficiently intellectual when they don’t join the group. I don’t know the answer here.
A movement aiming to remove errors in thinking is claiming a high standard for being right.
WTF?! Please provide evidence of LW encouraging PhD students at top-10 universities to drop out
The story of the PhD student who dropped out of a top-10 school to try a startup after attending a month-long LW event came to me secondhand from a friend. I will edit my post to avoid spreading rumors, but I trust the source.
real LW does not resemble the picture you described
I’m glad your experience has been more ideal.
If it did happen, then I want to know that it happened. It’s just that this is the first time I’ve even heard of a month-long LW event. (Which may say something about my ignorance. EDIT: it did, indeed, since until yesterday I didn’t even know SPARC takes two weeks, so I thought one week was the maximum for an LW event.)
I heard a lot of “quit school, see how successful and rich Zuckerberg is” advice, but it was all from non-LW sources.
I can imagine people at some LW meetup giving this kind of advice, since there is nothing preventing people with opinions of this kind from visiting LW meetups and giving advice. It just seems unlikely, and it certainly is not the LW “crowd wisdom”.
Here’s the program he went to, which did happen exactly once. It was a precursor to the much shorter CFAR workshops: http://lesswrong.com/lw/4wm/rationality_boot_camp/
That said, as his friend I think the situation is a lot less sinister than it’s been made out to sound here. He didn’t quit to go to the program; he quit a year or so afterwards to found a startup. He wasn’t all that excited about his PhD program, but he was really excited about startups, so he quit and founded one with some friends.
Thanks!
Now I remember I heard about that in the past, but I forgot completely. It actually took ten weeks!
Often, it’s a lack of information, not a lack of reasoning, that’s causing the problem.
Embracing the conclusion implied by new information, even when it disagrees with your initial guess, is a vital skill that many people lack. I was first introduced to this problem here on LW. Of course your claim might still be valid, but I’d like to point out that some members (me) wouldn’t have been able to take your advice if it weren’t for the material here on LW.
I’m thinking of a forum post where some (presumably young) poster asked for a Bayesian estimate on whether a “girl still liked him” based on her not calling
The problem with this example is really interesting: there exists some (subjectively objective) probability, which we can find with Bayesian reasoning. Your recommendation is meta-advice; rather than attempting to find this probability, you suggest investing some time and effort to get more evidence. I don’t see why this would deserve downvotes (rather, I would upvote it, I think), but note that a response containing percentages and Bayes’ Theorem is an answer to the question.
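For concreteness, here is a minimal sketch of the kind of calculation such an answer involves; every number below is a hypothetical stand-in, not a figure anyone in the thread actually gave:

```python
# Hypothetical Bayesian update on "does she still like him?",
# with made-up numbers purely to illustrate the mechanics.

prior = 0.5                  # P(likes him) before observing anything
p_no_call_given_likes = 0.3  # P(doesn't call | likes him)
p_no_call_given_not = 0.8    # P(doesn't call | doesn't like him)

# Bayes' theorem: P(likes | no call)
#   = P(no call | likes) * P(likes) / P(no call)
p_no_call = p_no_call_given_likes * prior + p_no_call_given_not * (1 - prior)
posterior = p_no_call_given_likes * prior / p_no_call

print(f"P(still likes him | didn't call) = {posterior:.2f}")  # ~0.27
```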
Saying you didn’t provide enough information for a probability estimate deserves downvotes because it misses the point. You can give probability estimates based on whatever information is presented. The estimate will be better with more information, but it’s still possible to make an estimate with little information.
Using a Value of Information calculation would be best, especially if tied to proposed experiments.
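As a minimal sketch of what such a Value of Information calculation might look like, with invented payoffs and a hypothetical “just call her” experiment that is idealized as revealing the answer perfectly:

```python
# Hypothetical Value of Information calculation: is it worth running
# a cheap experiment (e.g. just calling her) before deciding whether
# to keep pursuing? All payoffs and probabilities are invented.

p_likes = 0.27  # posterior from the sketch above
payoff = {      # utility of each (action, state) pair
    ("pursue", "likes"): 10.0,
    ("pursue", "not"): -2.0,
    ("move_on", "likes"): 0.0,
    ("move_on", "not"): 0.0,
}

def expected_utility(action, p):
    return p * payoff[(action, "likes")] + (1 - p) * payoff[(action, "not")]

# Best expected utility if we must decide right now:
eu_now = max(expected_utility(a, p_likes) for a in ("pursue", "move_on"))

# Expected utility if the experiment reveals the true state first,
# so we can pick the best action in each state (perfect information):
eu_info = (p_likes * max(payoff[("pursue", "likes")], payoff[("move_on", "likes")])
           + (1 - p_likes) * max(payoff[("pursue", "not")], payoff[("move_on", "not")]))

voi = eu_info - eu_now
print(f"EU now = {eu_now:.2f}, EU with info = {eu_info:.2f}, VoI = {voi:.2f}")
# Gather the evidence first whenever VoI exceeds the cost of gathering it.
```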
At the same time, you seem to criticise LW both for being self-help and for approaching rationality in an intellectual way that doesn’t maximize life outcomes.
I do think plenty of people on LW care about rationality in an intellectual way: they care about developing the idea of rationality, and about questions such as what happens when we apply Bayes’ theorem in situations where it usually isn’t applied.
In the case of deciding whether “a girl still likes a guy”, a practical answer focused on the situation would probably encourage the guy to ask the girl out. As you describe the situation, nobody actually gave the advice that calculating probabilities is a highly useful way to deal with the issue.
However, that doesn’t mean that the question of applying Bayes’ theorem to the situation is worthless. You might learn something about the practical application of Bayes’ theorem, and you get probability numbers that you could use to calibrate yourself (see the sketch at the end of this comment).
Do you argue that calibrating your predictions for high-stakes emotional situations isn’t a skill worth exploring, just because we live in a world where nearly nobody is actually good at it?
At LW we try to do something new. The fact that new ideas often fail doesn’t imply that we shouldn’t experiment with new ideas. If you aren’t curious about exploring new ideas and only want practical advice, LW might not be the place for you.
The simple fact of feeling agency in the face of uncertainty also shouldn’t be underrated.
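On the calibration point above, here is a minimal sketch of how recorded probability numbers could later be scored; the predictions and outcomes are invented for illustration:

```python
# Hypothetical calibration check: score past probability estimates
# against what actually happened, using the Brier score (mean squared
# error; 0.0 is perfect, 0.25 matches always saying 50%).

predictions = [0.9, 0.7, 0.3, 0.8, 0.6]  # invented P(event) estimates
outcomes = [1, 1, 0, 0, 1]               # 1 = the event happened

brier = sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)
print(f"Brier score: {brier:.3f}")  # lower is better calibrated
```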
The story of the PhD student who dropped out of a top-10 school to try a startup after attending a month-long LW event came to me secondhand from a friend.
Are you arguing that there are no cases where a PhD student has a great idea for a startup and should put that idea into practice and leave his PhD? Especially when he might have the connections to secure the necessary venture capital?
The story of the PhD student who dropped out of a top-10 school to try a startup after attending a month-long LW event came to me secondhand from a friend.
I don’t know of any month-long LW events, except maybe internships with an LW-affiliated organisation. Doing internships in general can bring people to do something they wouldn’t have thought about before.
Do you argue that calibrating your predictions for high-stakes emotional situations isn’t a skill worth exploring …?
No, I agree it’s generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available.
At LW we try to do something new. The fact that new ideas often fail doesn’t imply that we shouldn’t experiment with new ideas. If you aren’t curious about exploring new ideas and only want practical advice, LW might not be the place for you.
I guess what’s really bothering me here is that less-secure or less-wise people can be taken advantage of by confident-sounding higher-status people. I suppose this is no more true in LW than in the world at large. I respect trying new things.
The simple fact of feeling agency in the face of uncertainty also shouldn’t be underrated.
Hooray, agency! This is a question I hope to answer.
Are you arguing that there are no cases where a PhD student has a great idea for a startup and should put that idea into practice and leave his PhD? Especially when he might have the connections to secure the necessary venture capital?
I’m arguing that it was the wrong move in this case, and hurt him and others. In general, most startups fail, ideas are worthless compared to execution, and capital is available to good teams.
By what metric was his decision wrong?
If he’s trying to maximize expected total wages over his career, staying in academia isn’t a good way to do that, although he’d probably be better off at a larger, more established company than at a startup.
If he’s trying to maximize his career satisfaction, and he wasn’t happy in academia but was excited about startups, he made a good decision. And I think that was the case here.
Some other confounding factors about his situation at the time:
He’d just been accepted to YCombinator, which is a guarantee of mentoring and venture capital
Since he already had funding, it’s not like he was dumping his life savings into a startup expecting a return
He has an open invitation to come back to his PhD program whenever he wants
If you still really want to blame someone for his decision, I think Paul Graham had a much bigger impact on him than anyone associated with LessWrong did.
YC funding is totally worth going after! He made the right choice given that info. That’s what I get for passing on rumors.
No, I agree it’s generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available.
It’s an online discussion. There’s a bunch of information that might not be shared because it’s too private to share online. I certainly wouldn’t share all the information about a romantic interaction on LW, but I might share enough to ask an interesting question.
I do consider this case to be an interesting question. I like it when people discuss abstract principles like rational decision-making via Bayes’ theorem based on practical real-life examples instead of only using far-out thought experiments.
I’m arguing that it was the wrong move in this case, and hurt him and others.
If I’m understanding you right, you don’t even know the individual in question. People drop out of PhD programs all the time. I don’t think you can say whether or not they had good reasons for doing so without investigating each case individually.
I’d just like to point out that a ranking is a function of both the school and the metric, and thus the phrase “top-10 school” is not really well-formed. While it does convey significant information, it implies undue precision, and allowing people to sneak in unstated metrics is problematic.