For those who, like me, didn’t find it immediately obvious what the various parameters in “Table 2” mean:
g_AI is the economic growth rate per unit time produced by AI, versus g_0 without it. (You can take “per unit time” to mean “per year”; I don’t think anything in the paper depends on what the unit of time actually is, but the without-AI figures are appropriate for a unit of one year.)
m_AI is the personal mortality risk per year with AI, versus m_0 without it. (Simple model where everyone independently either dies or doesn’t during each year, with fixed probabilities.)
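(Under that model the chance of still being alive after t years is (1−m)^t, and your expected remaining lifespan is roughly 1/m years; that’s just the usual geometric-distribution arithmetic, not anything specific to the paper.)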
γ is a risk-aversion parameter: d(utility)/d(wealth) is inversely proportional to the γ-th power of wealth. So γ=1 gives the usual “log utility”; larger values mean extra wealth adds less and less utility, enough so that for any γ>1 the total utility you can get from being wealthier is bounded. (Aside: it’s not clear to me that “risk aversion” is a great term for this, since there are reasons to think that actual human risk aversion is not fully explained by decreasing marginal utility.)
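Concretely, the standard CRRA form consistent with that derivative (my reading, not a quote from the paper) is utility(w) = log(w) when γ=1 and utility(w) = w^(1−γ)/(1−γ) when γ≠1; for γ>1 that expression is negative and climbs towards 0 as w grows, which is why the utility you can get from extra wealth is bounded above.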
The values displayed in the table are “existential risk cutoffs”, meaning that you favour developing AI that will get you those benefits in economic growth and mortality rate up to the point at which the probability that it kills everyone equals the existential risk cutoff. (It’s assumed that any other negative consequences of AI are already factored into those m and g figures.)
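In expected-utility terms (my stylized reading, not necessarily the paper’s exact setup): if V_AI is your expected lifetime utility with an AI that doesn’t kill everyone, V_0 your expected lifetime utility without AI, and V_death the value assigned to the outcome where everyone dies, then the cutoff δ* is where you’re indifferent, i.e. (1−δ*)·V_AI + δ*·V_death = V_0, which rearranges to δ* = (V_AI − V_0) / (V_AI − V_death). How you pin down V_death does a lot of work in calculations like this.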
The smallest “existential risk cutoff” in the table is about 2%, and to get it you need to assume that the AI doesn’t help with human mortality at all, that it leads to economic growth of 10% per year (compared with a baseline of 2%), and that your marginal utility of wealth drops off really quickly. With larger benefits from AI, or more value assigned to enormous future wealth, you get markedly larger figures (i.e., under the paper’s assumptions you should be willing to tolerate a greater risk that we all die).
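For concreteness, here’s a minimal back-of-the-envelope sketch in Python of the kind of calculation involved. It is not the paper’s model: I’m assuming CRRA flow utility plus a constant u_bar that makes being alive better than death (with death normalised to zero), a made-up discount factor and baseline mortality rate, constant growth, and a finite horizon. The numbers it spits out depend heavily on those choices, especially u_bar, so don’t expect it to reproduce Table 2.

```python
import math

def flow_utility(c, gamma, u_bar):
    """Flow utility of consumption c while alive.
    Marginal utility is proportional to c**(-gamma); gamma = 1 gives log utility.
    u_bar is an assumed intercept that makes being alive better than death
    (death's flow utility is normalised to 0); this choice matters a lot."""
    if gamma == 1.0:
        return u_bar + math.log(c)
    return u_bar + c ** (1.0 - gamma) / (1.0 - gamma)

def lifetime_utility(g, m, gamma, u_bar=5.0, c0=1.0, beta=0.98, T=500):
    """Expected discounted utility of consumption growing at rate g per year,
    when you survive each year independently with probability 1 - m."""
    return sum(
        (beta * (1.0 - m)) ** t * flow_utility(c0 * (1.0 + g) ** t, gamma, u_bar)
        for t in range(T)
    )

def risk_cutoff(g_ai, m_ai, g_0, m_0, gamma):
    """Largest acceptable probability delta that the AI kills everyone:
    solves (1 - delta) * V_AI + delta * 0 = V_0 for delta."""
    v_ai = lifetime_utility(g_ai, m_ai, gamma)
    v_0 = lifetime_utility(g_0, m_0, gamma)
    return (v_ai - v_0) / v_ai

# 10% growth with AI versus 2% without, no mortality benefit,
# fairly strong diminishing marginal utility of wealth (gamma = 2).
# m = 0.01 is just a placeholder annual mortality rate, not the paper's value.
print(risk_cutoff(g_ai=0.10, m_ai=0.01, g_0=0.02, m_0=0.01, gamma=2.0))
```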
I suspect that other assumptions made in the paper diverge enough from real-world expectations that we should take these figures with a grain of salt.