I have a few thoughts about this.
First, I believe there is always likely to be a much higher ratio of critique to content creation. This is not a problem in and of itself. But as has been mentioned, and as motivated my post on the norm-one principle, heavy amounts of negative feedback are likely to discourage content creation. If the incentives to produce content are outweighed by the likelihood of punishment for bad contributions, there will be very little productive activity, and we will be filtering out not just noise but also potentially useful material. So I am still strongly in favor of establishing norms that regulate this kind of thing.
Secondly, it seems that the very best content creators spend some time writing and making information freely available, detailing their goals and so on, and then eventually go off to pursue those goals more concretely, and the content creation on the site goes down. This is more or less what happened with the original creators of this site. It is not something to prevent, simply something we should expect to happen periodically. Ideally, people would still engage with each other even after the primary content producers leave.
It’s hard to figure out what the “consensus” is on specific ideas, whether they should be pursued or discussed further, or whether people even still care about them. Currently the way content is produced is more like a stream of consciousness of the community as a whole. It goes in somewhat random directions, and it’s hard to predict where people will want to take their ideas or when engagement will suddenly stop. I would like some way of knowing what the top most important issues are and who is currently thinking about them, so I know whom to talk to if I have ideas.
This is related to my earlier point about content creators leaving. We only occasionally get filtered-down information about what they are working on. If I wanted to help them, I don’t know whom to contact or what the proper protocol is for getting involved in those projects. The standard way these projects seem to happen is that a handful of people who are really interested simply start working on one, but they stay essentially radio silent until they are either finished or feel they can’t proceed further. This seems less than ideal to me.
A lot of these problems seem difficult to me, and so far my suggestions have mostly been around discourse norms. But again this is why we need more engagement. Speak up, and even if your ideas suck, I’ll try to be nice and help you improve on them.
By the way, I think it’s important to mention that even asking questions is genuinely helpful. I can’t count the number of times someone has asked me to clarify a point, and in the process of clarifying I discovered new issues or important details I had previously missed, and updated as a result. So even if you don’t think you can offer much insight, just asking about things can be helpful, and you shouldn’t feel discouraged from doing so.
“I would like some way of knowing what the top most important issues are…”

LW was founded because Eliezer decided that making people think more rationally would help prevent AI disaster. That defines a scale of usefulness:
1) Math ideas (decision theory, game theory, logical induction, etc.) and philosophy ideas (orthogonality thesis, complexity of value, torture vs. dust specks, etc.) that are directly related to preventing AI disaster. There are surprisingly many such ideas, because the problem is so sprawling.
2) Meta ideas that improve your thinking about (1), like avoiding rationalization, changing your mind, noticing confusion, mysterious answers, etc.
3) Practice problems for (1) and (2). This can be anything from quantum physics to religion, as long as there’s a lesson that feeds back into the main goal.
At some point the community took another step toward meta, and latched onto everyday rationality which amounts to unreliable self-help with rationalist words sprinkled on top. That was mostly a failure, with the exception of some brilliant ideas like “politics is the mind-killer” that spilled over from (2) and were promptly forgotten as people slipped back into irrationality. (Another sign of slipping back is the newly positive attitude toward religion.) It seems like the only way to focus your mind on rationality is trying to solve some hard intellectual problem, like preventing AI disaster, and self-help isn’t such a problem.
“Another sign of slipping back is the newly positive attitude toward religion.”

Is it really that bad? I haven’t noticed, but perhaps I wasn’t paying enough attention, or my unconscious mind was trying to protect me by filtering out the most horrible things.
In case you only meant websites other than LW, I guess the definition of “rationalist community” has grown too far, and now means more or less “anyone who seems smart and either pays lip service to reason or is a friend with the right people”.
I’m not sure what conclusion to draw from this. I always felt wrong about censoring dissenters, and I still kinda do, but sometimes tolerating one smart religious person or one smart politically mind-killed person is all it takes to move the Overton window toward tolerating bullshit per se (as opposed to merely tolerating that this one specific smart person also believes some bullshit).
I’d like to see LessWrong 2.0 adopt a zero-tolerance policy against politics and religion. I guess I can dream.
“…everyday rationality which amounts to unreliable self-help with rationalist words sprinkled on top.”

Equations like “productivity equals intelligence plus joy minus square root of area under hyperbole of your procrastination” feel like self-help with rationality as attire.
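Written out, the kind of formula being parodied might look like this (my own LaTeX transcription, reading “area under … your procrastination” as an integral over time; the symbols are placeholders, not anything the original joke specified):

```latex
% Parody "productivity equation" of the kind described above.
% P = productivity, I = intelligence, J = joy, p(t) = procrastination at time t.
\[
  P = I + J - \sqrt{\int_0^T p(t)\,\mathrm{d}t}
\]
```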
But there is also some boring advice like: “pomodoros seem to help most people”.
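As an aside, for anyone who hasn’t tried the technique: a minimal pomodoro-timer sketch in Python. The 25/5-minute split, the cycle count, and the function name are my own assumptions, not something taken from this thread.

```python
import time

def pomodoro(work_minutes=25, break_minutes=5, cycles=4):
    """Alternate focused work blocks with short breaks, printing a prompt for each."""
    for i in range(1, cycles + 1):
        print(f"Cycle {i}: work for {work_minutes} minutes.")
        time.sleep(work_minutes * 60)   # focused work block
        print(f"Cycle {i}: take a {break_minutes}-minute break.")
        time.sleep(break_minutes * 60)  # short break
    print("All cycles done; take a longer break.")

if __name__ == "__main__":
    pomodoro()
```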
“I’d like to see LessWrong 2.0 adopt a zero-tolerance policy against politics and religion.”

In good old-fashioned tradition, we might start by tabooing the word “religion”. I don’t think cousin_it has a problem with having smart religious people on LessWrong; he would likely prefer it if Ilya still participated here. I think his concern is rather about a project like Dragon Army copying structures from religious organizations, and about the LessWrong community having solstice celebrations filled with ritual.
You’re right on both counts. Ilya is awesome, and rationalist versions of religious activities feel creepy to me.
I agree that there are different things one can possibly dislike about religion, and it would be better to be more precise.
For me, the annoying aspects are applying double standards of evidence (it would be wrong to blindly believe what random Joe says about the theory of relativity, but it is perfectly okay and actually desirable to blindly believe what random Joe said a few millennia ago about the beginning of the universe); speaking incoherent sentences (e.g. “god is love”); and twisting one’s logic and morality to fit the predetermined bottom line (a smart and powerful being who decides that billions of people need to suffer and die because someone stole a fucking apple from his garden is still somehow praised as loving and sane). If LW is an attempt to increase sanity, this is among the lower-hanging fruit. It’s like someone participating on a website about advanced math while insisting that 2+2=5, and people saying “well, I don’t agree, but it would be rude to publicly call them wrong”.
But I can’t speak for cousin_it, and maybe we are concerned with completely different things.
I personally can’t remember anybody saying “God is love” on LessWrong. On the other hand, I read recently of people updating in the direction that kabbalistic wisdom might not be completely bogus after reading Unsong.
Scott has this creepy mental skill where he could steelman a long string of random ones and zeroes, and some people would believe it contains the deepest secret to the universe.
I’d like to imagine that Scott is doing this to create a control group for his usual articles. By comparing how many people got convinced by his serious articles and how many people got convinced by his attempts to steelman nonsense, he can evaluate whether people agree with him because of his ideas or because of his hypnotic writing. :D
“I guess the definition of ‘rationalist community’ has grown too far, and now means more or less ‘anyone who seems smart and either pays lip service to reason or is a friend with the right people’.”

If you really think that, you should add the definition here: https://wiki.lesswrong.com/wiki/Rationalist_movement
“It seems like the only way to focus your mind on rationality is trying to solve some hard intellectual problem, like preventing AI disaster, and self-help isn’t such a problem.”

I don’t think the problem is that self-help isn’t a hard intellectual problem. It’s rather that it has direct application to daily life, and as such people feel the need to have strong opinions about it, even when those aren’t warranted. It’s similar to politics in that regard.
Good point, agreed 100%.
“Secondly, it seems that the very best content creators spend some time writing and making information freely available, detailing their goals and so on, and then eventually go off to pursue those goals more concretely, and the content creation on the site goes down.”
That is a rather good point. It suggests that if we want to keep LessWrong a healthy community, we need to maintain a strong pipeline of new content creators.
I see both sides of the “radio silence” thing. On one hand, it’s good to let other people know about your project in case they want to get involved. On the other hand, making a project “public” creates a lot of stuff to deal with. We both agree public criticism can be quite harsh. Organizing a group effort is difficult. Maintaining a cohesive vision becomes more difficult the more people are involved. Finally, a decent number of hyped rationalist projects seem to have had fundamental problems (Arbital comes to mind*).
My personal intuition is that in many cases it’s better to take a middle ground on when to take ideas public. Put together something like a “minimum viable project”, or at least a true “proof of concept”. Once you have that, it’s easier to keep a coherent vision, and it’s more likely the project is a good idea. It is suboptimal to spend lots of time organizing people and dealing with feedback before you have determined your project is fundamentally sound. In this post I tried to mention projects which were already underway or that could be done on a small scale. I should note that I am not very confident in my preceding intuition and would welcome your feedback.
*I am aware of the personal problems that hurt Arbital. I am also aware that there are/were plans for it to pivot to a micro-blogging platform. But the original vision of Arbital seems flawed; the Arbital leadership basically confirmed this some time ago.
Agree about the creation:critique ratio. Generativity/creativity training is the rationalist community’s current bottleneck, IMO.
And I think we’re mostly still trapped in a false implicit dogma that creativity is an innate talent, possessed by some rare individuals, that can’t be duplicated in anyone who isn’t already creative. What I’m hoping is that you can train people to come up with good ideas and, more importantly, that if we can harness this community’s ability to look for errors in reasoning, even bad ideas can slowly be transformed into good ones, as long as we come up with a decent framework for making that process robust.