Even if you don’t assume that the long-term future matters much, preventing AI risk is still a valuable policy objective. Here’s why.
In regulatory cost-benefit analysis, a tool called the “value of a statistical life” is used to measure how much value people place on avoiding risks to their own lives (source). By looking at evidence such as how much people pay for safety features in their cars, or how much extra they are paid for working in riskier jobs, most government agencies assign a value of about ten million dollars to one statistical life. That is, reducing the risk of death by one in a thousand for each of a thousand people is worth ten million dollars of government money.
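If it helps, here is that arithmetic as a minimal Python sketch (the ten-million-dollar figure is the round number used above, not any particular agency’s official value):

```python
# Sanity check of the statistical-life arithmetic above.
# The VSL figure is the illustrative round number from the text,
# not any agency's official value.

VSL_DOLLARS = 10_000_000         # ~$10 million per statistical life
people = 1_000                   # a group of a thousand people
risk_reduction_each = 1 / 1_000  # one-in-a-thousand risk reduction per person

statistical_lives = people * risk_reduction_each  # = 1.0 statistical life
value = statistical_lives * VSL_DOLLARS

print(f"{statistical_lives:.1f} statistical life -> ${value:,.0f}")
# 1.0 statistical life -> $10,000,000
```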
If experts on AI such as Stuart Russell are to be believed (and if they’re not to be believed, who is?), then superintelligent AI poses a sizeable risk of ending humanity. For a very conservative estimate, let’s assume the AI kills every single American and no one else. There are currently over 330 million Americans (source), so the value of a statistical life implies that reducing this risk by just one in a million is worth:
330 million Americans × (1 extinction outcome / 1 million outcomes) × ($10 million / statistical life) = $3,300,000,000
No, this is not a misprint. It is worth 3.3 billion dollars to reduce the risk of human extinction due to AI by one in one million, based on the government’s own cost-effectiveness metrics, even assuming that the long-term future has no significance, and even assuming that non-American lives have no significance.
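For readers who want to check the arithmetic themselves, here is the same calculation as a short Python sketch, under the conservative assumptions above (330 million Americans, a ten-million-dollar statistical life, a one-in-a-million risk reduction):

```python
# The extinction-risk calculation, using the conservative
# assumptions stated in the text.

VSL_DOLLARS = 10_000_000        # ~$10 million per statistical life
americans = 330_000_000         # everyone who dies in this scenario
risk_reduction = 1 / 1_000_000  # a one-in-a-million reduction in extinction risk

value = americans * risk_reduction * VSL_DOLLARS
print(f"${value:,.0f}")
# $3,300,000,000
```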
And AI experts say we could reduce the risk by a lot more than one in a million, for a lot less than 3.3 billion dollars.