Why not give us a short bullet-point list of your conclusions? Most readers around here wouldn’t dismiss them out of hand, even lacking a chain of arguments leading up to them.
We sure would. We think we are smart, and the inferential gap the OP mentioned is unfortunately almost invisible from this side. That’s why Eliezer had to write all those millions of words.
Easy test: send a summary/bullet point/whatever as a private message to a few select people from LessWrong, and ask them for their reactions. Possible loss: a few select members become biased, due to the large inferential gap, against the ideas that you gave up in order to pursue a more important goal. Possible gains: rational feedback on your ideas, supporters, and an estimate of the number of supporters you could gain by sharing your ideas more widely on this site.
That is an interesting test but it is not testing quite the same thing as whether the conclusions would be dismissed out of hand in a post. “Herding cats” is a very different thing to interacting with a particular cat with whom you have opened up a direct mammalian social exchange.
Perhaps. People, PM me if you’re interested. No guarantees.
In case So8res wants to try this, I’d be quite curious to see the bullet points.
me too
Me too.
And me as well.
I think you underestimate the potential loss. Worst case scenario one of the people he PMs his ideas to puts them online and spreads links around this site.
Do we presume Eliezer had to write all those millions of words?
Write a bullet-point summary for each sequence and tell me that one would not be tempted to “dismiss them out of hand, even lacking a chain of arguments leading up to them”, unless one is already familiar with the arguments.
I’ll try, just for fun, to summarize Eliezer’s conclusions from the pre-fun-theory and pre-community-building parts of the Sequences:
artificial intelligence can self-improve;
with every improvement, the rate at which it can improve increases;
AGI will therefore experience exponential improvement (AI fooms; see the toy sketch after this list);
even if there’s a cap to this process, the resulting agent will be a very powerful agent, incomprehensibly so (singularity);
an agent’s effectiveness does not constrain its utility function (orthogonality thesis);
humanity’s utility function occupies a very tiny and fragmented fraction of the set of all possible utility functions (human values are fragile);
if we fail to encode the correct human utility function in a self-improving AGI, even tiny differences will result in a catastrophically unpleasant future (UFAI as x-risk);
AGI is likely to arrive pretty soon, so we had better hurry to figure out how to do the previous point correctly.
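For what it’s worth, the second and third bullets amount to a toy differential equation (this is my own gloss, not anything the Sequences spell out this way): suppose an AI’s capability C improves at a rate that itself grows with C, say

\[ \frac{dC}{dt} = k\,C \quad\Longrightarrow\quad C(t) = C(0)\,e^{kt}, \]

which is exponential growth; and if the rate grows faster than linearly in C, e.g. dC/dt = kC^p with p > 1, the solution blows up in finite time, which is closer to what “foom” is usually taken to mean. The constants k and p here are placeholders for illustration, not claims about any actual system.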
Anecdote: I think I’ve had better responses summarizing LW articles in a few paragraphs without linking than I have linking to them with short explanations.
It does take a lot to cross those inferential distances, but I don’t think quite that much.
To be fair, my discussions may not cover a whole sequence; I have the opportunity to pick out what is needed in a particular instance.
That would kind of require that I spend my time reading dozens to hundreds of blog entries espousing a mixture of basic good sense and completely unfalsifiable theories extrapolated from pure mathematics, just so I can summarize them in terms of their most surprising conclusions.
EDIT: The previous comment is not meant as personal disrespect. It’s just meant to point out that treating Eliezer’s Sequences as epistemically superlative, and requiring someone to read them all to have even well-informed views on anything, is… low-utility, especially considering I have read a fair portion.
I agree with all that, actually. My original point was not that Eliezer was right about everything, or that the Sequences should be canonized into scriptures, but that the conclusions are far enough from the mainstream as to be easily dismissed if presented on their own.
Which ones?
Eli, I want to +1 this comment because I agree with the awkwardness of expecting people to read such a large amount of information to participate in a conversation, but it looks like you’re suggesting also that those articles are “just basic good sense.” Unless I misunderstood you, that’s “obviousness-in-retrospect” (aka hindsight bias). So I won’t go +1 or −1.
I wouldn’t say retrospect, no. Maybe it’s because I’ve mostly read the “Core Sequences” (covering epistemology rather than more controversial subjects), but most of it did seem like basic good sense, in terms of “finding out what is true and actually correcting your beliefs for it”. As in, I wasn’t really surprised all that much by what was written there, since it was mostly giving me vocabulary for things I had already known on some vaguer level.
Maybe I just had an abnormally high exposure to epistemic rationality prior to coming across the Sequences via HPMoR, since I found out about those at age 21 rather than younger and was already of the “read everything interesting in sight” bent? Maybe my overexposure to an abnormally scientific clade of people makes me predisposed to think some degree of rationality is normal?
Maybe it was the fact that when I heard about psychics as a kid I bought myself a book on telekinesis, tried it out, and got bitterly disappointed by its failure to work—indicating an abnormal predisposition towards taking ideas seriously and testing them?
Screw it. Put this one down as “destiny at work”. Everyone here has a story like that; it’s why we’re here.
I think we see eye to eye: we both came here with a large amount of pre-existing knowledge and understanding of rationality...and I think for both of us, reading all of the Sequences is just not going to be a realistic expectation. But by the same token, I can’t go with you when you say the ideas are basic.
Even if you knew them already, they are still very important and useful ideas that most people don’t seem to know or act upon. I have respect for them and the people that write about them, even if I don’t have time to go through all of them, and the inability to do that forms a significant barrier to my participation in the site.
Personally, I think it is plausible that I would find such a bullet-point list true or mostly true. However, I have already dismissed out of hand the possibility that it would be true, important, and novel all at once.
When I read this story, I became emotionally invested in Nate (So8res). I empathized with him. He’s the protagonist of the story. Therefore, I have to accept his ideas because otherwise I’d be rejecting his status as protagonist.