What stood out to me in the video is that Eliezer no longer seems able to conceive of any positive outcome at all, which is beyond reason. It made me wonder what approach a company could possibly develop for alignment, or what a supposedly aligned AI could possibly do, for Eliezer to take back his doom predictions, and I suspect the answer is none. The impression I got was that he is by now closed to the possibility entirely.

I found the Time article heartbreaking. These are parents, intelligent, rational parents, whom I have respect and compassion for, essentially grieving the death of a young, healthy child, based on the unjustified certainty of impending doom. I’ve read more hopeful accounts from people living in Ukrainian warzones, in parts of the Sahel being swallowed by the Sahara, or on islands being drowned by climate change, where the evidence of risk and the lack of reason for hope are far more conclusive. At the end of the day, Eliezer is worried that we will fail at making a potentially emerging powerful agent friendly, while we still know extremely little about these agents and their natural alignment tendencies. Compared to so many other doom scenarios, the certainty here is just not high.

I am glad people here are taking AI risk seriously, and that this risk is being increasingly recognised. But this trend towards “dying with dignity” because all hope is seen as lost is very sad, very worrying, and very wrong. The case for climate change risk is far, far clearer, and yet you will note that climate activists are neither advocating terrorism, nor giving up, nor pronouncing certain doom. There is grief, and there is fear, and the climate activist scene has many problems, but I have never felt this pronounced wrongness there.
This market by Eliezer, on possible reasons AI may yet have a positive outcome, seems to refute your first sentence.
Also, I haven’t seen any AI notkilleveryoneism people advocating terrorism or giving up.