I agree, but as I understand it, they’re explicitly saying they won’t release any AGI advances they make. What will it do to their credibility to be funding a “secret” AI project?
I honestly worry that this could kill funding for the organization, which doesn’t seem optimal in any scenario.
Potential Donor: I’ve been impressed with your work on AI risk. Now, I hear you’re also trying to build an AI yourselves. Who do you have working on your team?
SI: Well, we decided to train high schoolers since we couldn’t find any researchers we could trust.
PD: Hm, so what about the project lead?
SI: Well, he’s done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.
PD: Huh. So, how has the work gone so far?
SI: That’s the best part: we’re keeping it all secret so that our advances don’t fall into the wrong hands. You wouldn’t want that, would you?
PD: [backing away slowly] No, of course not… Well, I need to do a little more reading about your organization, but this sounds, um, good…
I think part of the issue is that while Eliezer’s conception of these problems has continued to evolve, we keep pointing, and being pointed, back to posts that he now only partially agrees with. We might chart a more accurate picture of his current position by winding through a thousand comments, but that’s a difficult thing to do.
To pick one example from a recent thread: here he adjusts (or flags for adjustment) his thinking on Oracle AI, but someone who missed that comment would have no idea from reading the older articles.
It seems like our local SI representatives recognize the need for an up-to-date summary document to point people to. Until then, our current refrain of “read the sequences” will grow increasingly misleading as more and more updates and revisions are spread across years of comments (that said, I still think people should read the sequences :) ).