Yeah, of course, I agree. But, as I said, I didn’t mean that Eliezer did something wrong. I am just rooting for someone else from the AGI Safety community who has hope and who also honestly sees that things are getting terrible, and treats that not as a reason to give up but as a reason to fight harder. If everyone who has a plan thinks it is not good enough and conveys that mood to the public, then the public will not believe them, and people end up more depressed than motivated to do something. I think that is worse than having an insane plan that still gives people hope. It’s not about the logical side, i.e., whether there are concrete instructions on what to do (Anthropic, Conjecture, Redwood, etc. have some). It is about the passion and conviction of the people who propose the plans.
Igor Timofeev
Hi! I am new to the AGI Safety topic and aware of almost no approaches to resolving it. But I am not exactly new to deep learning, and I find the identifiability of deep learning models interesting: for example, papers like “Advances in Identifiability of Nonlinear Probabilistic Models” by Ilyes Khemakhem or “On Linear Identifiability of Learned Representations”. Does anyone know if there is some direction of AGI Safety research that relates to identifiability? It seems intuitively related to me, but maybe it is not.
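For anyone unfamiliar with the term: a minimal sketch of what “identifiable up to a linear transformation” means operationally, in the spirit of “On Linear Identifiability of Learned Representations” (this is my own illustration, not code from either paper). The two “models” here are simulated as unknown linear transforms of a shared latent code plus noise; the check is whether one representation can be mapped onto the other by a fitted linear map.

```python
# Sketch of linear identifiability: two representations of the same data are
# "the same up to a linear transformation" if a fitted linear map between them
# explains (almost) all of the variance. The representations below are
# hypothetical stand-ins for two independently trained models.
import numpy as np

rng = np.random.default_rng(0)

n_samples, latent_dim, rep_dim = 2000, 8, 32
z = rng.normal(size=(n_samples, latent_dim))  # shared ground-truth latent factors

# Each "model" sees the latent through its own unknown linear map plus small noise.
A1 = rng.normal(size=(latent_dim, rep_dim))
A2 = rng.normal(size=(latent_dim, rep_dim))
rep1 = z @ A1 + 0.01 * rng.normal(size=(n_samples, rep_dim))
rep2 = z @ A2 + 0.01 * rng.normal(size=(n_samples, rep_dim))

# Fit the best linear map rep1 -> rep2 by least squares and measure how much
# of rep2's variance it explains; a value near 1 means the two representations
# agree up to a linear transformation.
W, *_ = np.linalg.lstsq(rep1, rep2, rcond=None)
residual = rep2 - rep1 @ W
r2 = 1.0 - residual.var() / rep2.var()
print(f"Variance of rep2 explained by a linear map from rep1: {r2:.3f}")
```

With actual trained networks you would replace `rep1` and `rep2` by the embeddings each model produces on the same held-out inputs and run the same fit; the papers above study the conditions under which such agreement is guaranteed rather than accidental.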
I think that at the point where hoping for aliens is the best plan, we have really fucked up )))