I would like some way of knowing what the most important issues are.
LW was founded because Eliezer decided that making people think more rationally would help prevent AI disaster. That defines a scale of usefulness:
1) Math ideas (decision theory, game theory, logical induction, etc.) and philosophy ideas (orthogonality thesis, complexity of value, torture vs dust specks, etc.) that are directly related to preventing AI disaster. There are surprisingly many such ideas, because the problem is so sprawling.
2) Meta ideas that improve your thinking about (1), like avoiding rationalization, changing your mind, noticing confusion, mysterious answers, etc.
3) Practice problems for (1) and (2). This can be anything from quantum physics to religion, as long as there’s a lesson that feeds back into the main goal.
At some point the community took another step toward meta, and latched onto everyday rationality which amounts to unreliable self-help with rationalist words sprinkled on top. That was mostly a failure, with the exception of some brilliant ideas like “politics is the mind-killer” that spilled over from (2) and were promptly forgotten as people slipped back into irrationality. (Another sign of slipping back is the newly positive attitude toward religion.) It seems like the only way to focus your mind on rationality is trying to solve some hard intellectual problem, like preventing AI disaster, and self-help isn’t such a problem.
Another sign of slipping back is the newly positive attitude toward religion.
Is it really that bad? I haven’t noticed, but perhaps I was not paying enough attention, or my unconscious was trying to protect me by filtering out the most horrible things.
In case you only meant websites other than LW, I guess the definition of “rationalist community” has grown too far, and now means more or less “anyone who seems smart and either pays lip service to reason or is friends with the right people”.
Not sure what conclusion I should draw from this. I always felt wrong about censoring dissenters, and I still kinda do, but sometimes tolerating one smart religious person or one smart politically mindkilled person is all it takes to move the Overton window towards tolerating bullshit per se (as opposed to merely tolerating that this one specific smart person also believes some bullshit).
I’d like to see LessWrong 2.0 adopt a zero-tolerance policy against politics and religion. I guess I can dream.
everyday rationality which amounts to unreliable self-help with rationalist words sprinkled on top.
Equations like “productivity equals intelligence plus joy minus the square root of the area under the hyperbola of your procrastination” feel like self-help with rationality as attire.
But there is also some boring advice like: “pomodoros seem to help most people”.
I’d like to see LessWrong 2.0 adopt a zero-tolerance policy against politics and religion.
In good old-fashioned tradition, we might start with tabooing “religion”. I don’t think cousin_it has a problem with having smart religious people on LessWrong. He would likely prefer it if Ilya still participated on LessWrong.
I think his concern is rather about a project like Dragon Army copying structures from religious organizations, and the LessWrong community having solstice celebrations filled with ritual.
I agree that there are different things one can possibly dislike about religion, and it would be better to be more precise.
For me, the annoying aspects are applying double standards of evidence (it would be wrong to blindly believe what random Joe says about the theory of relativity, but it is perfectly okay and actually desirable to blindly believe what random Joe said a few millennia ago about the beginning of the universe), speaking incoherent sentences (e.g. “god is love”), and twisting one’s logic and morality to fit the predetermined bottom line (a smart and powerful being who decides that billions of people need to suffer and die because someone stole a fucking apple from his garden is still somehow praised as loving and sane). If LW is an attempt to increase sanity, this is among the lowest-hanging fruit. It’s like someone participating in a website about advanced math while insisting that 2+2=5, and people saying “well, I don’t agree, but it would be rude to publicly call them wrong”.
But I can’t speak for cousin_it, and maybe we are concerned with completely different things.
I personally can’t remember anybody saying “God is love” on LessWrong. On the other hand, I recently read about people updating in the direction that kabbalistic wisdom might not be completely bogus after reading Unsong.
Scott has this creepy mental skill where he could steelman a long string of random ones and zeroes, and some people would believe it contains the deepest secret to the universe.
I’d like to imagine that Scott is doing this to create a control group for his usual articles. By comparing how many people got convinced by his serious articles and how many people got convinced by his attempts to steelman nonsense, he can evaluate whether people agree with him because of his ideas or because of his hypnotic writing. :D
I guess the definition of “rationalist community” has grown too far, and now means more or less “anyone who seems smart and either pays lip service to reason or is friends with the right people”.
It seems like the only way to focus your mind on rationality is trying to solve some hard intellectual problem, like preventing AI disaster, and self-help isn’t such a problem.
I don’t think the problem is that self-help isn’t a hard intellectual problem. It’s rather that it’s a problem with direct application to daily life, and as such people feel the need to have strong opinions about it, even when those aren’t warranted. It’s similar to politics in that regard.
You’re right on both counts. Ilya is awesome, and rationalist versions of religious activities feel creepy to me.
If you really think that, you should add the definition here: https://wiki.lesswrong.com/wiki/Rationalist_movement
Good point, agreed 100%.