Do we presume Eliezer had to write all those millions of words?
Write a bullet-point summary for each sequence and tell me that one would not be tempted to “dismiss them out of hand, even lacking a chain of arguments leading up to them”, unless one is already familiar with the arguments.
I’ll try, just for fun, to summarize Eliezer’s conclusions from the pre-fun-theory and pre-community-building part of the Sequences:
artificial intelligence can self-improve;
with every improvement, the rate at which it can improve increases;
AGI will therefore experience exponential improvement (AI fooms; a rough sketch of this step follows the list);
even if there’s a cap to this process, the resulting agent will be a very powerful agent, incomprehensibly so (singularity);
an agent’s effectiveness does not constrain its utility function (orthogonality thesis);
humanity’s utility function occupies a very tiny and fragmented fraction of the set of all possible utility functions (human values are fragile);
if we fail to encode the correct human utility function in a self-improving AGI, even tiny differences will result in a catastrophically unpleasant future (UFAI as x-risk);
AGI is coming pretty soon, so we had better hurry to figure out how to do the previous point correctly.
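To make the foom step explicit, here is a minimal sketch, assuming (and this is an assumption on top of the bullets above, not something the summary itself spells out) that the rate of capability growth is proportional to the current capability level $C$:

\[
\frac{dC}{dt} = k\,C \quad\Longrightarrow\quad C(t) = C(0)\,e^{kt}, \qquad k > 0.
\]

The weaker premise, that each improvement merely increases the rate of further improvement, only guarantees super-linear growth; strict proportionality to $C$ is the extra assumption needed to get a literal exponential.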
Anecdote: I think I’ve had better responses summarizing LW articles in a few paragraphs without linking than linking to them with short explanations.
It does take a lot to cross those inferential distances, but I don’t think quite that much.
To be fair, my discussions may not cover a whole sequence; I have the opportunity to pick out what is needed in a particular instance.
That would kind of require that I spend my time reading dozens to hundreds of blog entries espousing a mixture of basic good sense and completely unfalsifiable theories extrapolated from pure mathematics, just so I can summarize them in terms of their most surprising conclusions.
EDIT: The previous comment is not meant as personal disrespect. It’s just meant to point out that treating Eliezer’s Sequences as epistemically superlative and requiring someone to read them all to even have well-informed views on anything is… low-utility, especially considering I have read a fair portion.
I agree with all that, actually. My original point was not that Eliezer was right about everything, or that the Sequences should be canonized into scriptures, but that the conclusions are far enough from the mainstream as to be easily dismissed if presented on their own.
Which ones?
Eli, I want to +1 this comment because I agree that expecting people to read such a large amount of information in order to participate in a conversation is awkward, but it also looks like you’re suggesting that those articles are “just basic good sense.” Unless I misunderstood you, that’s “obviousness-in-retrospect” (aka hindsight bias). So I won’t go +1 or −1.
I wouldn’t say retrospect, no. Maybe it’s because I’ve mostly read the “Core Sequences” (covering epistemology rather than more controversial subjects), but most of it did seem like basic good sense, in terms of “finding out what is true and actually correcting your beliefs for it”. As in, I wasn’t really surprised all that much by what was written there, since it was mostly giving me vocabulary for things I had already known on some vaguer level.
Maybe I just had an abnormally high exposure to epistemic rationality before coming across the Sequences via HPMoR, since I found out about those at age 21 rather than younger and was already of the “read everything interesting in sight” bent? Maybe my overexposure to an abnormally scientific clade of people predisposes me to think some degree of rationality is normal?
Maybe it was the fact that when I heard about psychics as a kid, I bought myself a book on telekinesis, tried it out, and got bitterly disappointed by its failure to work, indicating an abnormal predisposition towards taking ideas seriously and testing them?
Screw it. Put this one down as “destiny at work”. Everyone here has a story like that; it’s why we’re here.
I think we see eye to eye: we both came here with a large amount of pre-existing knowledge and understanding of rationality...and I think for both of us, reading all of the Sequences is just not going to be a realistic expectation. But by the same token, I can’t go with you when you say the ideas are basic.
Even if you knew them already, they are still very important and useful ideas that most people don’t seem to know or act upon. I have respect for them and for the people who write about them, even if I don’t have time to go through all of them, and that inability forms a significant barrier to my participation on the site.