Promoted to curated: I think this classification is good and useful, both to refer to in conversation and to help people navigate the broader alignment space. And I think the post is presented in a clear and relatively concise way.
I do think there would have been value in connecting it more to past writing about similar topics, though I recognize that this might have easily doubled the effort of writing this post.
Thanks! I agree that more connection to past writings is always good, and I’m happy to update the post accordingly—although, on reflection, nothing comes to mind as an obvious omission (except perhaps citing sections of Superintelligence?). Of course I’m pretty biased, since I already put in the things I thought were most important—so I’d be glad to hear any additional suggestions you have.
One place that comes to mind with a bunch of related writing is Arbital.
I was also thinking about linking to a bunch of related taxonomies. The “Disjunctive AI Risk” paper comes to mind. I will think about other examples.