For a better take on the purpose of Less Wrong, see Rationality: Common Interest of Many Causes.
(This post reads as moderately crazy to me.)
I agree with Nesov.
To give more detailed impressions:
I suspect LW would be worse, and worse at improving the world, if altruism, especially altruism around a particular set of causes that seemed strange to most newcomers, were obligatory. It would make many potential contributors uncomfortable, and would heighten the feeling that newcomers had walked into a cult.
Improving LW appears to be an effective way to reduce existential risk. Many LW-ers have donated to existential-risk-reducing organizations (SIAI, FHI, nuclear risk organizations), come to work at SIAI, or done useful research; this is likely to continue. As long as this is the case, serious aspiring rationalists who contribute to LW are reducing existential risk, whether or not they consider themselves altruists.
Whether your goal is to help the world or to improve your own well-being, do improve your own rationality. (“You”, here, is any reader of this comment.) Do this especially if you are already among LW’s best rationalists. Often ego makes people keener on teaching rationality, and removing the specks from other people’s eyes, than on learning it themselves; I know I often find teaching more comfortable than exploring my own rationality gaps. But at least if your aim is existential risk reduction, there seem to be increasing marginal returns to increasing rationality/sanity; better to help yourself or another skilled rationalist gain a rationality step than to help two beginners gain a much smaller step.
If we are to reduce existential risk, we’ll need to actually care about the future of the world, so that we take useful actions even when this requires repeatedly shifting course, admitting that we’re wrong, etc. We’ll need to figure out how to produce real concern about humanity’s future, while avoiding feelings of obligation, resentment, “I’m sacrificing more than you are, and am therefore better than you” politics, etc. How to achieve this is an open question, but my guess is that it involves avoiding “shoulds”.
But at least if your aim is existential risk reduction, there seem to be increasing marginal returns to increasing rationality/sanity; better to help yourself or another skilled rationalist gain a rationality step than to help two beginners gain a much smaller step.
Could you elaborate on this? I understand how having only a bit of rationality won’t help you much, because you’ll have so many holes in your thinking that you’re unlikely to complete a complex action without taking a badly wrong step somewhere, but I mentally envisioned that leveling out at a point (for no good reason).
It might level out at some point—just not at the point of any flesh-and-blood human I’ve spent much time with. For example, among the best rationalists I know (and SIAI has brought me into contact with many serious aspiring rationalists), I would expect literally everyone to be able to at least double their current effectiveness if they had (in addition to their current rationality subskills) the rationality/sanity/self-awareness strengths of the other best rationalists I know.
Have you spent much time with non-flesh-and-blood humans?
Joking aside, what does the last part of your post mean? Are you saying that if the rationalists you currently know could combine the strengths of the others with their own, they would be substantially more effective?
Well, maybe I did not drink enough sanity juice today :)
Which reads as moderately crazy: rhollerith_dot_com’s post, or Rationality: Common Interest of Many Causes?
I read it as referring to rhollerith_dot_com’s post.