Yeah, of course, I agree. But, as I said, I didn't mean that Eliezer did something wrong. I am just rooting for someone else from the AGI Safety community who has hope, who honestly sees that things are getting terrible, and who treats that not as a reason to give up but as a reason to fight harder. If everyone who has a plan thinks it is not good enough and conveys this mood to the public, then the public will not believe them, and people end up more depressed than motivated to do something. I think that is worse than having an insane plan but giving people hope. It's not about the logical side, i.e., whether there are concrete instructions on what to do (Anthropic, Conjecture, Redwood, etc. have some). It is about the passion and belief of the people who propose the plans.
Gandalf had a plan he could rally people around. Eliezer sees no such plan for friendly AI.
Generally, the act of rallying people around a plan means that other possible plans get less attention.