The problem is that the key actor is, of course, OpenAI, not Eliezer, so how Eliezer values X-risk is not relevant to the analysis. What matters is how much the people at AI companies value their own deaths, and given that I believe they don't value their lives infinitely, Eliezer's calculations don't matter, since he isn't a relevant actor at an AI company.