Because Less Wrong is about human rationality, not the Singularity Institute, and not me.
Then from whence came the Q&A with Eliezer Yudkowsky, your fiction submissions (which I think lately have become of questionable value to LW), and other such posts which properly belong on either your personal blog or the SIAI blog?
I don’t think that if any other organization were posting classified ads here, it would be tolerated.
However ugly it sounds, you’ve been using Less Wrong as a soapbox. Regardless of our statement of purpose, you have made it, in part, about you and SIAI.
So I for one think that the OP’s post isn’t particularly out of place.
Edit: For the record I like most of your fiction. I just don’t think it belongs here anymore.
That’s like saying the Dialogues don’t belong in Gödel, Escher, Bach.
To be honest, maybe they didn’t. Those crude analogies interspersed between the chapters (some as long as chapters themselves!) were too often unnecessary. The book was long enough without them… but with them? Most could have been summed up in a paragraph.
If, to understand recursion, you need magical stories about turtles and crabs drinking hot tea before a rabbit shows up with a device that allows him to enter paintings, then you’re never going to get it.
On the other hand, if the author’s introduction of stories in that manner is necessary to explain his subject or thesis, then something is wrong with either the subject or his exposition of it.
I know GEB is like the Book around Less Wrong, but what I’m saying here isn’t heresy. Admittedly, Hofstadter had to write I Am a Strange Loop because people couldn’t understand GEB.
It’s a question of aesthetics. Of course math doesn’t have to be presented this way, but a lot of people like the presentation.
You should make explicit what you are arguing. It seems to me that the cause of your argument is simply “I don’t like the presentation”, but you are trying to argue (rationalize) it as a universal. There is a proper generalization somewhere in between, like “it’s not an efficient way to [something specific]”.
Admittedly, Hofstadter had to write I Am a Strange Loop because people couldn’t understand GEB.
Wait, what? I Am a Strange Loop was written about 30 years later. Hofstadter wrote four other books on mind and pattern in the meantime, so this doesn’t make any sense.
An interview with Douglas R. Hofstadter:
What led you to write the book? (I Am a Strange Loop)
. . . two philosophers [Ken Williford and Uriah Kriegel] asked me if I would write about my thoughts about what an “I” is. They said that they had appreciated what I had said of these ideas in Gödel, Escher, Bach many years ago, but that they knew that I felt that my message had not really been absorbed—that Gödel, Escher, Bach had become popular but that the driving force behind the book had not really been perceived by most readers, let alone absorbed by a large number of people, and I was frustrated with this. I felt I had reached people, but not exactly as I had hoped. I had greater success with the book than I’d ever expected, but I didn’t have the exact type of success that I wanted. . .
I thought, “This is a good opportunity to at least address the world of philosophers of mind. It’s a narrow world, but if I can say it well, at least they’ll know what I intended to do in my book GEB almost 30 years ago.”
I don’t think that if any other organization were posting classified ads here, it would be tolerated.
Actually, that’s not true: classified ads for both SIAI and the Future of Humanity Institute have been posted. The sponsors of Overcoming Bias and Less Wrong have posted such announcements, and others haven’t, which is an intelligible and not particularly ugly principle.
You’re right. It is the sponsor’s prerogative.
I’m having some slight difficulty putting perceptions into words—just as I can’t describe in full detail everything I do to craft my fictions—but I can certainly tell the difference between that and this.
Since I haven’t spent a lot of time here talking about ideas along the lines of Pirsig’s Quality, there are readers who will think this is a copout. And if I wanted to be manipulative, I would go ahead and offer up a decoy reason they can verbally acknowledge in order to justify their intuitive perceptions of difference—something along the lines of “Demanding that a specific person justify specific decisions in a top-level post doesn’t encourage the spreading threads of casual conversation about rationality” or “In the end, every OB/LW post was about rationality even if it didn’t look that way at the time, just as much as the Quantum Physics Sequence amazingly ended up being about rationality after all.” Heck, if I were a less practiced rationalist, I would be inventing verbal excuses like that to justify my intuitive perceptions to myself. As it is, though, I’ll just say that I can see the difference perceptually, and leave it at that—after adding some unnecessary ornaments to prevent this reply from being voted down by people who are still too focused on the verbal.
PS: We post classified ads for FHI, too.
You could have just not replied at all. It would have saved me the time spent trying to write up a response to a reply which is nearly devoid of any content.
Incidentally, I don’t have “intuitive” perceptions of difference here. It’s pretty clear to me, and I can explain why. Though in my estimation, you don’t care.
When I read Eliezer’s fiction, concepts from dozens of Less Wrong posts float to the surface of my mind, are processed, and the implications become more intuitively grasped. Your brain may be wired somewhat differently, but for me fiction is useful.
PPS: Probing my intuitions further, I suspect that if the above post had been questioning e.g. komponisto’s rationality in the same tone and manner, I would have had around the same reaction of offtopicness for around the same reason.
I can see a couple of reasons why the post does belong here:
It concerns Less Wrong itself, specifically its origin and motivation. This should be of interest to community members.
You (Eliezer) are the most visible advocate and practitioner of human rationality improvement. If it turns out that you are not particularly rational, then perhaps the techniques you have developed are not worth learning.
Psy-Kosh’s answer seems perfectly reasonable to me. I wonder why you don’t just give that answer, instead of saying the post doesn’t belong here. Actually if I had known this was one of the reasons for starting OB/LW, I probably would have paid more attention earlier, because at the beginning I was thinking “Why is Eliezer talking so much about human biases now? That doesn’t seem so interesting, compared to the Singularity/FAI stuff he used to talk about.”
E.Y. has given that answer before:
Rationality: Common Interest of Many Causes
I am going to respond to the overall direction of your responses.
That is feeble, and for those who don’t understand why, let me explain.
Eliezer works for SIAI, a non-profit where his pay depends on donations. Many people on LW are interested in SIAI, some even donate to it, and others potentially could. When your pay depends on convincing people that your work is worthwhile, it is always worth justifying what you are doing. This becomes even more important when it looks like you’re distracted from what you are being paid to do. (If you ever work with a VC and their money, you’ll know what I mean.)
When it comes to ensuring that SIAI continues to pay you, especially when you are the FAI researcher there, justifying why you are writing a book on rationality, which in no way solves FAI, becomes extremely important.
EY, ask yourself this: what percentage of the people who are interested in SIAI and donate are interested in FAI? Then ask what percentage are interested in rationality with no clear plan of how that gets to FAI. If the answer to the first is greater than the second, then you have a big problem, because one could interpret the use of your time writing this book on rationality as wasting donated money, unless there is a clear reason how rationality books get you to FAI.
P.S. If you want to educate people to help you out, as someone speculated, you’d be better off teaching them computer science and mathematics.
Remember, my post drew no conclusions, so for Yvain: I have cast no stones; I merely ask questions.
If you want to educate people to help you out, as someone speculated, you’d be better off teaching them computer science and mathematics.
Even on the margin? There are already lots of standard textbooks and curricula for mathematics and computer science, whereas I’m not aware of anything else that fills the function of Less Wrong.
If you have previously donated to SIAI, I’ll be happy to answer you elsewhere.
If not, I am not interested in what you think SIAI donors think. Given your other behavior, I’m also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better.
I apologize I rippled your pond.
“If not, I am not interested in what you think SIAI donors think.”
I never claimed to know what SIAI donors think; I asked you to think about that. But I think the fact that SIAI has as little money as it does after all these years speaks volumes about SIAI.
“Given your other behavior, ”
Why, because I ask questions that you don’t like when answered honestly? Or is it because I don’t blindly hang on every word you speak?
“I’m also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better.”
I never claimed I would donate, nor will I ever as long as I live. As for experience telling you better, you have none, and considering the lack of money SIAI has and your arrogance, you probably never will, so I will keep my own counsel on that part.
“If you have previously donated to SIAI, I’ll be happy to answer you elsewhere.”
Why, because you don’t want to disrupt the LW image of Eliezer the genius? Or is it because you really are distracted, as I suspect, or have given up because you cannot solve the problem of FAI, another good possibility? These questions are simple and easy to answer, and I see no real reason you can’t answer them here and now. If you find the answers embarrassing, then change; if not, then what have you got to lose?
If your next response is as feeble as the last ones have been, don’t bother posting it for my sake. You claim you want to be a rationalist; then try applying reason to your own actions and answer the questions asked honestly.
The questions are fine. I think it’s the repetitiveness, obvious hostility, and poor grammar and spelling that get on people’s nerves.
Not to mention barely responding to repeated presentations of the correct answer.
...and it being a top-level post instead of Open Thread comment. Probably would’ve been a lot more forgiving if it’d been an Open Thread comment.
You’re that sure that at this point in time you have all the information you’d ever need to make that decision?
why you are writing a book on rationality, which in no way solves FAI
Rationality is the art of not screwing up—seeing what is there instead of what you want to see, or are evolutionarily susceptible to seeing. When working on a task that may have (literally) earth-shattering consequences, there may not be a skill that’s more important. Getting people educated about rationality is of prime importance for FAI.