It’s great to hear your thoughts on the post!
I’d also like to see more posts that do this sort of “mapping”. I think mapping AI risk arguments is too neglected; there is more discussion and more examples in this post by Gyrodiot. I’m continuing to work collaboratively in this area in my spare time, and I’m excited that more people are getting involved.
We weren’t trying to fully account for AGI timelines; our choice of scope was based on a mix of personal interest and importance. I know of people currently working on posts similar to this one that will go in depth on timelines, discontinuity, paths to AGI, the nature of intelligence, etc., which I’m excited about!
I agree with all your points. You’re right that this post’s scope does not include broader alternatives for reducing AI risk. It wasn’t even designed to guide what people should work on, though it can serve that purpose. We were really just trying to map out some of the discourse clearly, as a starting point and an example for future work.