Imagine that this is v0 of a series of documents that need to evolve into humanity’s (/ some specific group’s) actual business plan for saving the world.
That’s part of what I meant to be responding to — not that this post is not useful, but that I don’t see what makes it so special compared to all the other stuff that Eliezer and others have already written.
To put it another way, I would agree that Eliezer has made (what seem to me like) world-historically-significant contributions to understanding AI risk and advocating for taking it seriously.
So, if 2007 Eliezer was asking himself, “Why am I the only one really looking into this?”, I think that’s a very reasonable question.
But here in 2022, I just don’t see this particular post as that significant a contribution compared to what’s already out there.
Why is this v0 and not https://arbital.com/explore/ai_alignment/, or the Sequences, or any of the documents that Evan links to here?