Important question: is this going to be a broad overview of AI risk, covering different viewpoints (not just MIRI's), a little like Responses to Catastrophic AGI Risk was, or is it going to be more focused on the MIRI-esque view of things?
Focused on the MIRI-esque view.