There are several questions here: is AGI an existential threat? If so, how can we safely build and use it? And if that is not possible, how can we prevent it from being built at all?
There are strong arguments that the answer to the first question is yes. See, for example, everything that Eliezer has said on the subject. Many others agree; some disagree. Read and judge.
What can be done to avoid catastrophe? The recent dialogues with Eliezer posted here indicate that he has little confidence in most of the work done on this so far. The people doing that work presumably disagree. Since AGI has not yet been created, the work is necessarily theoretical: the evidence consists of mathematical frameworks, arguments, and counterexamples.