Just because someone doesn’t have a ‘cause’ to shout about from the rooftops doesn’t mean that person has no reason to want to be more rational.
I never really agreed with that post; it seems simpler to say that because instrumental rationality’s effectiveness is easier to judge, it is less likely to become corrupted by idiosyncrasies or blind spots. That is not sufficient grounds for concluding that rationality for its own sake (or just without an overriding, save-the-world ‘purpose’) is doomed to failure.
This seems contrary to LessWrong’s best interests. Communities live or die by how they treat new members, and discouraging newcomers leads to stagnation, marginalization, and eventual irrelevance. There is admittedly a large inferential gap between a theist and your typical LW member, but what would you say to someone who had just come in from reading HPMOR and wanted to know more about LessWrong? “Sorry, come back when you have a cause like an anime character”?
Sorry for the rant, but this really rubbed me the wrong way.
Just because someone doesn’t have a ‘cause’ to shout about from the rooftops doesn’t mean that person has no reason to want to be more rational.
I don’t disagree, but becoming a PUA is the end result of studying whatever that field is called, so in this twisted analogy RA is presumably the end result of studying rationality. My point is that if you develop rationality in a vacuum and never seriously confront anything with it, there’s no way to know whether your rationality is actually effective, and no occasion to find and fix its flaws; that is basically what you say in your second paragraph, so surely we agree here.
There is admittedly a large inferential gap between a theist and your typical LW member, but what would you say to someone who had just come in from reading HPMOR and wanted to know more about LessWrong? “Sorry, come back when you have a cause like an anime character”?
It sounds like you were turned off by EY’s illustrations with anime references, and not the actual conclusion of the article. In any case, I suspect you would have agreed with it had it been written about e.g. HPMoR!Harry’s obsession with science-ifying magic.
EDIT: If by some chance that question wasn’t rhetorical, of course I wouldn’t say that.
It sounds like you were turned off by EY’s illustrations with anime references, and not the actual conclusion of the article. In any case, I suspect you would have agreed with it had it been written about e.g. HPMoR!Harry’s obsession with science-ifying magic.
It took me a comparatively short time (compared to some of the other strangeness in the Sequences) to get past all of the anime references. I don’t think that Harry wanting to science-ify magic would have been enough to bring me around, as what I don’t like about the ‘something to protect’ post is that it seems to say that wanting to be more rational for small, mundane, and, more importantly, common reasons isn’t enough.
Not wanting to be ripped off at the car dealership, trying to find the best way to turn an economic profit, out-competing rivals, etc., are not sufficient reasons to pursue rationality; only a grand purpose like FAI, or cryonics, or curing cancer, or breeding higher-yield wheat like Borlaug is enough. Otherwise you’re just wasting everyone’s time and should be content with being a mortal.
From the article:
No one masters the Way until more than their life is at stake. More than their comfort, more even than their pride.
You can’t just pick out a Cause like that because you feel you need a hobby. Go looking for a “good cause”, and your mind will just fill in a standard cliche. Learn how to multiply, and perhaps you will recognize a drastically important cause when you see one.
But if you have a cause like that, it is right and proper to wield your rationality in its service.