I’m going to naively express something that your risk calculation makes me think:
I think EY, I, and others who are persuaded by him seem to be rating the utility of an x-risk outcome as nothing less (more?) than negative infinity. I.e., whether the risk is 1% or 50%, our expected utility from AI x-risk comes out to approximately negative infinity, which outweighs even a 99% chance of 10^20+ utility.
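To make the arithmetic explicit (a rough sketch, treating the doom outcome as literally negative-infinite utility, with $p$ standing for any nonzero probability of it):

$$\mathbb{E}[U] = p \cdot (-\infty) + (1 - p) \cdot 10^{20} = -\infty \quad \text{for any } p > 0$$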
This is why shutting it down seems to be the only logical move in this calculation right now: if you think a negative-infinity outcome exists at all in the outcome space, then the only solution is to avoid that outcome space completely until you can be assured it no longer contains one. It's not about driving the probability of that outcome down to some tiny number; it's about eliminating it from the outcome space entirely.
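Here's a toy sketch of the same comparison, with IEEE-754 infinities standing in for the utility math; the probability and payoff below are made-up placeholders, not real estimates:

```python
# Toy expected-utility comparison when one outcome is treated as infinitely bad.
# All numbers are illustrative placeholders, not actual risk estimates.
NEG_INF = float("-inf")

def expected_utility(p_doom: float, upside: float) -> float:
    """Expected utility when the doom outcome has negative-infinite utility."""
    return p_doom * NEG_INF + (1.0 - p_doom) * upside

build = expected_utility(p_doom=0.01, upside=1e20)   # any p_doom > 0 yields -inf
abstain = 0.0                                         # stay out of the outcome space entirely

print(build)              # -inf
print(abstain > build)    # True: abstaining dominates no matter how large the upside
```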
The problem is that the key actor is of course OpenAI, not Eliezer, so how Eliezer values x-risk is not relevant to the analysis. What matters is how much the people at AI companies disvalue their own deaths, and given that I believe they don't value their lives infinitely, Eliezer's calculations don't matter, since he isn't a relevant actor at an AI company.