My question isn’t “is this happening?”—my question is, “how big is the effect, and does it matter?” I suspect that’s the case for a lot of LW readers.
This is a recurring theme that I find over and over. These sorts of biases and problems are obvious; they are the kind of thing that is pretty much guaranteed to exist, the kind of thing you could hardly hope to escape. But that does not in any way mean that the effects are large enough to be relevant, or that the time spent fixing them could not be better spent elsewhere. It is not enough to say that it is worthwhile; you must show that it is worthwhile enough to compete with other options.
This implies that your article, should you decide to write it, would in fact be understood, and that a good proportion of the LW readership has in fact considered your platform. For your article to be effective, it may be necessary for you to lay out the extent to which these issues are an actual problem, instead of simply pointing out the obvious.
Let me put it this way: The effect is big enough that I have no qualms calling it a blanket inability. This should be implied by the rules of common speech, but people who consider themselves intelligent find it easier to believe that such confidence is evidence of irrationality.
What’s interesting is that you think such an article can actually be written. (Let’s ignore that I earned sub-zero karma with my posts in this thread today.)
Consider the premise:
LessWrong doesn’t think about what’s written beyond what’s written.
(Obviously there are a few stray thoughts that you’ll find in the comments, but they are non-useful and do not generally proliferate into more descriptive articles.)
Let’s be clear: the purpose of such an article would be to get LessWrong to think about what’s written beyond what is written. This is what would make LessWrong useful beyond any other internet forum. Such an article would advocate independent and bold thinking, and then voicing any compelling realizations back to LessWrong to spark further thought by others. A few short passes of this process and you could see some pretty impressive brainstorming—all while maintaining LessWrong’s standards of rationality. Recall that what makes machine intelligence so formidable is that thought becomes a very cheap and very effective resource. If the potential for communal superintelligence isn’t sufficient payoff here, nothing will be.
Keep in mind that this is only possible insofar as a significant portion of LessWrong is willing to think beyond what is written.
If we suppose that this is actually possible—that superintelligent-quality payoffs are available here with only slight optimization of LessWrong—then why isn’t LessWrong already trying to do this? Why weren’t they trying years ago? Why weren’t they trying when That Alien Message was published? You might want to say that the supposing is what creates the apparent question: that if LessWrong could really trivially evolve into such a mechanism, it most definitely would have by now, and that the reason we don’t see it doing this is that many consider it irrational and not worth trying for.
Okay.
Then what is the point of thinking beyond what’s written?
If there aren’t significant payoffs to self-realizations that increase rationality substantially, then what is the point? Why be LessWrong? Why bother coming here? Why bother putting in all this effort if you’re only going to end up performing marginally better? I can already hear half the readers thinking, “But marginally better performance can have significant payoffs!” Great, then that supports my argument that LessWrong could benefit tremendously from very minor optimization towards thought sharing. But that’s not what I was saying. I was saying: after all the payoffs are calculated, if they amount to no more than marginally better performance even with intense increases in rationality, then what is the point? Are we just here to advertise the uFAI pseudo-hypothesis? (Not being willing to conduct the experiment makes it an unscientific hypothesis, however reasonable it may be not to conduct it.) If so, we could do a lot better by leaving people irrational as they are and spreading classic FUD on the matter. Write a few compelling stories that freak everyone out—even intelligent people.
That’s not what LessWrong is. Even if that were what Yudkowsky wanted out of it in the end, that’s not what LessWrong is. If that were all LessWrong was, there wouldn’t be nearly as many users as there are. I recall numerous times Yudkowsky himself stated that in order to make LessWrong grow, he would need to provide something legitimate beyond his own ulterior motives. By Yudkowsky’s own assertion, LessWrong is more than FAI propaganda.
LessWrong is what it states on the front page. I am not here writing this for my own hubris. (The comments I write under that premise sound vastly different.) I am writing this for one single purpose. If I can demonstrate to you that such an article and criticism cannot currently be written, that there is no sequence of words that will provoke a “thinking beyond what’s written” response in a significant portion of LessWrongers, then you will have to acknowledge that there is a significant resource here that remains significantly underutilized. If I can’t make that argument, I have to keep trying with others, waiting for someone to recognize that there is no immediate path to a LessWrong awakening.
I’ve left holes in my argument. Mostly because I’m tired and want to go to bed, but there’s nothing stopping me from simply not sending this and waiting until tomorrow. Sleepiness is not an excuse or a reason here. If I were more awake, I’d try writing a more optimal argument instead of stream-of-consciousness. But I don’t need to. I’m not just writing this to convince you of an argument. I’m writing this as a test, to see if you can accept (purely on principle) that thought is inherently useful. I’m attempting to convince you not of my argument, but to use your own ability to reason to derive your own stance. I’m not asking you to agree, and I’d prefer if you didn’t. What I want is your thoughts on the matter. I don’t want knee-jerk sophomoric rejections of obvious holes that have nothing to do with my core argument. I don’t want to be told I haven’t thought about this enough. I don’t want to be told I need to demonstrate an actual method. I don’t want you to repeat what all other LessWrongers have told me after they summarily failed to grasp the core of my argument. The holes I leave open are intentional. They are tripholes for sophomores. They are meant to weed out impatient fools, even if it means getting downvoted. It means wasting less of my time on people who are skilled at pretending they’re actually listening to my argument.
LessWrong, in its current state, is beneath me. It performs marginally better than your average internet forum. There are non-average forums that perform significantly better than LessWrong in terms of not only advancing rationality, but just about everything. There is nothing that makes LessWrong special aside from its front-page potential to form a community whose operations represent a superintelligent process.
I’ve been slowly giving out slightly more detailed explanations of this here and there for the past month or so. I’ve left fewer holes here than anywhere else I’ve made similar arguments. I have put the idea so damn close to the finish line for you that for you to not spend two minutes reflecting on your own, past what’s written here, indicates to me exactly how strong the cognitive biases are that prevent LessWrong from recursive self-improvement.
Even in the individuals who signal being the most open minded and willing to hear my argument.