At the same time you seem to criticise LW for being self-help and for approaching rationality in an intellectual way that doesn’t maximize life outcomes.
I do think plenty of people on LW care about rationality in an intellectual way: they care about developing the idea of rationality and about questions such as what happens when we apply Bayes’ theorem in situations where it usually isn’t applied.
In the case of deciding whether “a girl still likes a guy”, a practical answer focused on the situation would probably encourage the guy to ask the girl out. As you describe the situation, nobody actually gave the advice that calculating probabilities is a highly useful way to deal with the issue.
However, that doesn’t mean the question of applying Bayes’ theorem to the situation is worthless. You might learn something about the practical application of Bayes’ theorem. You also get probability numbers that you could use to calibrate yourself.
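To make that concrete, here is a minimal sketch of the kind of update being discussed. The function name and all the probabilities (the prior and the two likelihoods) are made up purely for illustration, not taken from the original discussion:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Hypothetical numbers: prior belief that she is still interested, and how likely
# the observed behaviour (say, quick replies) is under each hypothesis.
posterior = bayes_update(prior=0.3, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(round(posterior, 2))  # 0.46 -- a number you can later score yourself against
```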
Do you argue that calibrating your predictions in high-stakes emotional situations isn’t a skill worth exploring, simply because we live in a world where nearly nobody is actually good at making calibrated predictions in such situations?
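For what it’s worth, “calibrated” here means something you can check after the fact. A minimal sketch, with invented predictions purely for illustration:

```python
# Record a stated probability for each guess and what actually happened,
# then score yourself afterwards. All numbers below are invented.
predictions = [(0.9, True), (0.7, False), (0.6, True), (0.2, False)]

# Brier score: mean squared error between stated probability and outcome (lower is better).
brier = sum((p - float(outcome)) ** 2 for p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Crude calibration check: of the predictions stated at 70% or above, how many came true?
confident = [(p, o) for p, o in predictions if p >= 0.7]
print(f"Stated >=70%, true {sum(o for _, o in confident) / len(confident):.0%} of the time")
```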
At LW we try to do something new. The fact that new ideas often fail doesn’t imply that we shouldn’t experiment with new ideas. If you aren’t curious about exploring new ideas and only want practical advice, LW might not be the place for you.
The simple feeling of agency in the face of uncertainty also shouldn’t be underrated.
The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event, I heard secondhand from a friend.
Are you arguing that there are no cases where a PhD student has a great idea for a startup and should put that idea into practice and leave his PhD? Especially when he might have the connections to secure the necessary venture capital?
The PhD student dropping out of a top-10 school to try to do a startup after attending a month-long LW event, I heard secondhand from a friend.
I don’t know about month-long LW events, except maybe internships with an LW-affiliated organisation. Doing internships in general can bring people to do things they wouldn’t have thought about before.
Do you argue that calibrating your predictions in high-stakes emotional situations isn’t a skill worth exploring …?
No, I agree it’s generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available.
At LW we try to do something new. The fact that new ideas often fail doesn’t imply that we shouldn’t experiment with new ideas. If you aren’t curious about exploring new ideas and only want practical advice, LW might not be the place for you.
I guess what’s really bothering me here is that less-secure or less-wise people can be taken advantage of by confident-sounding higher-status people. I suppose this is no more true on LW than in the world at large. I respect trying new things.
The simple feeling of agency in the face of uncertainty also shouldn’t be underrated.
Hooray, agency! This is a question I hope to answer.
Are you arguing that there are no cases where a PhD student has a great idea for a startup and should put that idea into practice and leave his PhD? Especially when he might have the connections to secure the necessary venture capital?
I’m arguing that it was the wrong move in this case, and hurt him and others. In general, most startups fail, ideas are worthless compared to execution, and capital is available to good teams.
No, I agree it’s generally a worthwhile skill. I objected to the generalization from insufficient evidence, when additional evidence was readily available.
It’s an online discussion. There’s a bunch of information that might not be shared because it’s too private to share online. I certainly wouldn’t share all the information about a romantic interaction on LW. But I might share enough information to ask an interesting question.
I do consider this case to be an interesting question. I like it when people discuss abstract principles like rational decision-making via Bayes’ theorem based on practical real-life examples instead of only using far-out thought experiments.
I’m arguing that it was the wrong move in this case, and hurt him and others.
If I’m understanding you right, you don’t even know the individual in question. People drop out of PhD programs all the time. I don’t think you can say whether or not they have good reasons for doing so without investigating the case on an individual basis.
By what metric was his decision wrong?
If he’s trying to maximize expected total wages over his career, staying in academia isn’t a good way to do that, although he’d probably be better off at a larger, more established company than at a startup.
If he’s trying to maximize his career satisfaction, and he wasn’t happy in academia but was excited about startups, he made a good decision. And I think that was the case here.
Some other confounding factors about his situation at the time:
He’d just been accepted to Y Combinator, which guarantees mentoring and venture capital
Since he already had funding, it’s not like he was dumping his life savings into a startup expecting a return
He has an open invitation to come back to his PhD program whenever he wants
If you still really want to blame someone for his decision, I think Paul Graham had a much bigger impact on him than anyone associated with LessWrong did.
YC funding is totally worth going after! He made the right choice given that info. That’s what I get for passing on rumors.