From my experience as a long-time lurker and occasional poster, LW is not easily accessible to new users. The Sequences are indeed very long and time-consuming, and most of them contain multiple links to other posts you are supposed to have already read, which creates confusion if you happen to forget the gist of a particular post. Besides, Eliezer draws a number of huge philosophical conclusions (reductionism, computationalism, MWI, the Singularity, etc.), and a lot of people aren’t comfortable swallowing all of that at once. Indeed, the “why should I buy all this?” question has popped into my head many times while reading.
Furthermore, I think criticism like this is good, and the LW crowd should not have such a negative reaction to it. After all, the Sequences do go on and on about not getting unduly emotionally attached to beliefs; if the community can’t take criticism, that is probably a sign that it is getting a little too cozy with its current worldview.
Criticism is good, but this criticism isn’t all that useful. Ultimately, what SIAI does is the conclusion of a chain of reasoning; the Sequences largely present that reasoning. Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying “justify yourselves!” doesn’t advance the debate.
Agreed—criticism of this sort vaguely reminds me of criticism of evolution, in that it attacks a particular part of the desired target rather than its fundamental assumptions (my apologies to the original poster). Still, I think we should question the Sequences as much as possible, and even misguided criticism can be useful. I’m not saying we should welcome an unending series of top-level posts like this, but I for one would like to see critical essays on some of LW’s most treasured posts. (There goes my afternoon...)
Of course, substantive criticism of specific arguments is always welcome.
My primary point was to inquire about the foundation and credibility of that chain of reasoning. Is it merely a coherent internal logic reasoning about itself, or is it based on firm ground?
Take the following example:

A recursively self-improving AGI quickly reaches a level that can be considered superhuman. Since no advanced nanotechnology was necessary for its construction, it is so far severely limited in what it can accomplish, given its vast and fast intellect. It therefore solves all the open problems associated with advanced nanotechnology and secretly mails the solutions to a researcher. The researcher is very excited and builds a corporation around the new technology. Later the AGI buys stock in that company and plants a front man. Through some superhuman social engineering it finally obtains control of the technology...
At this point we are already deep into a chain of speculation about something shaky, which is at the same time being used as evidence for the very reasoning that produced it. Taking a conclusion and running with it, building a huge framework of further conclusions around it, is in my opinion questionable. A conclusion first has to yield at least marginal evidence of its feasibility; only then can you form further hypotheses about its consequences. Otherwise you are making estimates within a framework that is itself not based on firm ground. The gist of what I was trying to say is: do not base further conclusions and actions on conclusions that do not themselves bear any evidence.
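To make that worry concrete, here is a minimal sketch in Python, with made-up step probabilities of my own (nothing of the sort appears in any SIAI material), of how confidence in a scenario decays when it rests on a chain of individually plausible but unverified conclusions:

```python
# Hypothetical illustration only: invented probabilities for a chain of
# unverified conclusions (e.g. "recursive self-improvement works",
# "molecular nanotechnology is solvable", "the AGI can covertly acquire it", ...).
# Assumes, for simplicity, that the steps are independent.
step_probabilities = [0.9, 0.8, 0.7, 0.6]

compound = 1.0
for p in step_probabilities:
    compound *= p  # each further conclusion multiplies in its own uncertainty

print(f"Confidence in the full scenario: {compound:.2f}")  # prints 0.30
```

The real numbers are of course unknown, and the steps are not independent; the point is only that stacking further conclusions on an unverified one compounds the uncertainty rather than reducing it.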
I was inquiring about the supporting evidence at the origin of your complex multi-step extrapolations, which are argued to be inductive generalizations. If there isn’t any, what difference is there between such extrapolations and writing fiction?
I’ve read and heard enough to be in doubt, since I haven’t come across a single piece of evidence beyond some seemingly sound argumentation (as far as I can tell) in favor of a few basic principles of unknown accuracy. And even those arguments are vague enough that you cannot tell them apart from mere philosophical musing.
In the case of the SIAI, it rather seems that there are hypotheses built on other hypotheses, none of which have yet been tested.
Pointing to a particular gap or problem in that chain is useful; just ignoring it and saying “justify yourselves!” doesn’t advance the debate.

Disagree. If you are asking people for money (and they are paying you), the burden is on you to provide justification, at multiple levels of detail, to your prospective and current donors.
But, but… then you’ll have to, like, repeat yourself a lot!
No shit. If you want to change the world, be prepared to repeat yourself a lot.