A great many Less Wrongers gave feedback on earlier drafts of “Responses to Catastrophic AGI Risk: A Survey,” which has now been released. This is the preferred discussion page for the paper.
The report, co-authored by former MIRI researcher Kaj Sotala and the University of Louisville’s Roman Yampolskiy, is a summary of the extant literature (250+ references) on AGI risk, and can serve as either a guide for researchers or an introduction for the uninitiated.
Here is the abstract:
Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.