Thanks for the feedback, John! I’ve moved the Aryeh/Eliezer exchange to a footnote, and I welcome more ideas for ways to improve the piece. (Folks are also welcome to repurpose anything I wrote above to create something new and more beginner-friendly, if you think there’s a germ of a good beginner-friendly piece anywhere in the OP.)
Also, per footnote 1: “I wrote this post to summarize my own top reasons for being worried, not to try to make a maximally compelling or digestible case for others.”
The original reason I wrote this was that Dustin Moskovitz wanted something like this, as an alternative to posts like AGI Ruin:
[H]ave you tried making a layman’s explanation of the case? Do you endorse the summary? I’m aware of much longer versions of the argument, but not shorter ones!
From my POV, a lot of the confusion is around the confidence level. Historically EY makes many arguments to express his confidence, and that makes people feel snowed, like they have to inspect each one. I think it’d be better if there was more clarity about which are strongest.
I think one argument is about the number of relatively independent issues, and that’s still valid, but then you could link out to that list as a separate exercise without losing everyone.
This post is speaking for me and not necessarily for Eliezer, but I figure it may be useful anyway. (A MIRI researcher did review an earlier draft and left comments that I incorporated, at least.)
And indeed, one of the obvious ways it could be useful is if it ends up evolving into (or inspiring) a good introductory resource. I don’t know how likely that is, or whether it already works as an intro-ish resource when paired with something else, etc.
Tagging @Richard_Ngo