Is there more to this than “I can’t be bothered to read the Sequences—please justify everything you’ve ever said in a few paragraphs for me”?
My charitable reading is that he is arguing that there will be other people like him, and that if SIAI wishes to continue growing, there needs to be more easily digestible material.
From my experience as a long-time lurker and occasional poster, LW is not easily accessible to new users. The Sequences are indeed very long and time-consuming, and most of them link to multiple other posts you are supposed to have already read, which creates confusion if you happen to forget the gist of a particular post. Besides, Eliezer draws a number of huge philosophical conclusions (reductionism, computationalism, MWI, the Singularity, etc.), and a lot of people aren’t comfortable swallowing all of that at once. Indeed, the “why should I buy all this?” question has popped into my head many times while reading.
Furthermore, I think criticism like this is good, and the LW crowd should not have such a negative reaction to it. After all, the Sequences do go on and on about not getting unduly emotionally attached to beliefs; if the community can’t take criticism, that is probably a sign that it is getting a little too cozy with its current worldview.
Criticism is good, but this criticism isn’t all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying “justify yourselves!” doesn’t advance the debate.
Agreed. Criticism of this sort vaguely reminds me of criticism of evolution, in that it attacks a particular part of the desired target rather than its fundamental assumptions (my apologies to the original poster). Still, I think we should question the Sequences as much as possible, and even misguided criticism can be useful. I’m not saying we should welcome an unending series of top-level posts like this, but I for one would like to see critical essays on some of LW’s most treasured posts. (There goes my afternoon...)
Of course, substantive criticism of specific arguments is always welcome.
My primary point was to inquire about the foundation and credibility of that chain of reasoning. Is it merely an internally coherent logic reasoning about itself, or is it based on firm ground?
Take the following example: A recursively evolving AGI quickly reaches a level that can be considered superhuman. Since no advanced nanotechnology was necessary for its construction, it is so far awfully limited in what it can accomplish given its vast and fast intellect. It therefore solves all the open problems associated with advanced nanotechnology and secretly mails its solutions to a researcher. The researcher is very excited and builds a corporation around the new technology. Later the AGI buys stock in that company and plants a front man. Through some superhuman social engineering it finally obtains control of the technology...
At this point we are already deep into a chain of reasoning built on something shaky, which is at the same time being used as evidence for the very reasoning that involves it. Taking a conclusion and running with it, building a huge framework of further conclusions around it, is in my opinion questionable. First that conclusion has to yield at least marginal evidence of its feasibility; only then can you form further hypotheses about its consequences. Otherwise you are making estimates within a framework that is itself not on firm ground. The gist of what I was trying to say is: do not keep basing conclusions and actions on other conclusions that themselves bear no evidence (see the toy calculation sketched below).
I was inquiring about the supporting evidence at the origin of your complex multi-step extrapolations, which are argued to follow from inductive generalizations. If there isn’t any such evidence, what difference is there between those extrapolations and writing fiction?
I’ve read and heard enough to be in doubt, since I haven’t come across a single piece of evidence beyond some seemingly sound argumentation (as far as I can tell) in favor of a few basic principles of unknown accuracy. And even those arguments are sufficiently vague that you cannot differentiate them from mere philosophical musing.
In the case of SIAI, it rather seems that there are hypotheses based on other hypotheses, none of which have yet been tested.
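To make the worry about stacked conclusions concrete, here is a minimal sketch; the five steps and the 0.8 probabilities below are purely illustrative assumptions, not figures claimed anywhere in this thread. It only shows that the probability of a conjunction shrinks with every additional step:

    # Toy illustration: assign each step of the scenario above a generous,
    # hypothetical probability and multiply them together.
    step_probabilities = {
        "recursively evolving AGI reaches a superhuman level": 0.8,
        "it solves the open problems of advanced nanotechnology": 0.8,
        "a researcher builds a corporation around the solutions": 0.8,
        "the AGI gains control via stock purchases and a front man": 0.8,
        "superhuman social engineering succeeds": 0.8,
    }

    chain_probability = 1.0
    for step, p in step_probabilities.items():
        chain_probability *= p

    print(round(chain_probability, 2))  # 0.33 -- far lower than any single step

The exact numbers do not matter; the point is that every additional untested step multiplies in another factor below one, which is why evidence for the individual links is being asked for before the whole chain is used to justify action.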
Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying “justify yourselves!” doesn’t advance the debate.
Disagree. If you are asking people for money (and they are paying you), the burden is on you to provide justification at multiple levels of detail to your prospective or current donors.
But, but… then you’ll have to, like, repeat yourself a lot!
No shit. If you want to change the world, be prepared to repeat yourself a lot.
If so… is that request bad?
If you are running a program where you are trying to convince people on a large scale, then you need to be able to provide overviews of what you are saying at various levels of resolution. Getting annoyed (at one of your own donors!) for such a request is not a way to win.
Edit: At the time, Eliezer didn’t realize that XiXiDu was a donor.
Getting annoyed (at one of your own donors!) for such a request is not a way to win.
I don’t begrudge SIAI at all for using Less Wrong as a platform for increasing its donor base, but I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor. You can ask Eliezer to not get annoyed, but is it fair to expect all the other LW regulars to do the same as well?
I’m not sure what the solution is to this problem, but I’m hoping that somebody is thinking about it.
I can definitely see myself getting annoyed sooner or later, if SIAI donors keep posting low-quality comments or posts, and then expecting special treatment for being a donor.
Me too. I upvoted this post because I hoped it would stimulate higher-quality discussion of SIAI (whether complimentary, critical, or both) in the future. I’ve been hoping to see such a discussion on LW for a while to help me think through some things.
In other words, you see XiXiDu’s post as the defector in the Asch experiment who chooses C when the group chooses B but the right answer is A?
To be fair, I don’t think XiXiDu expected special treatment for being a donor; he didn’t even mention it until Eliezer basically claimed that he was being insincere about his interest. (EDIT: Thanks to Wei Dai, I see he did mention it. No comment on motivations, then.)
I think that Eliezer’s statement is not an expression of a desire to give donors special treatment in general; it’s a reflection of the fact that, knowing Xi is a donor and proven supporter of SIAI, he then ought to give Xi’s criticism of SIAI more credit for being sincere and worth addressing somehow. If Xi were talking about anything else, it wouldn’t be relevant.
He mentioned it earlier in a comment reply to Eliezer, and then again in the post itself: