The answer is that apocalypse insurance—unlike liability insurance—must pay out in advance of the destruction of everyone. If somebody wishes to risk killing you (with some probability), there’s presumably some amount of money they could pay you now, in exchange for the ability to take that risk.
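As a rough illustration of the pricing intuition in that excerpt (my own sketch, not anything from the post), here is what an ex-ante premium might look like if every affected person demanded expected-value compensation for the risk imposed on them. The probability, per-person compensation, and population numbers are purely hypothetical placeholders.

```python
# Minimal sketch of the ex-ante pricing intuition above.
# All numbers are hypothetical, not estimates from the post.

def required_premium(p_catastrophe: float,
                     compensation_per_person: float,
                     population: int) -> float:
    """Up-front payment owed if each affected person demands
    expected-value compensation for the risk imposed on them."""
    return p_catastrophe * compensation_per_person * population

# e.g. a 1% risk, $1M demanded per person, 8 billion people affected
print(f"${required_premium(0.01, 1_000_000, 8_000_000_000):,.0f}")
# -> $80,000,000,000,000 (80 trillion); the figure is dominated by
#    whatever probability and per-person compensation you assume.
```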
Pretty sure you mean they should pay premiums rather than payouts?
I like the spirit of this idea, but I think it's both theoretically and practically impossible: how do you value an apocalypse? Payouts are incalculable, infinite, or meaningless if no one is around to collect them.
The underlying idea seems sound to me: there are unpredictable civilizational outcomes resulting from pursuing this technology—some spectacular, some horrendous—and the pursuers should not reap all the upside when they’re highly unlikely to bear any meaningful downside risks.
I suspect this line of thinking could be grating to many self-described libertarians who lean e/acc and underweight the possibility that technological progress != prosperity in all cases.
It also seems highly impractical because there is not much precedent for insuring against novel transformative events for which there's no empirical basis.* Good luck getting OAI, FB, MSFT, etc. to consent to such premiums, much less getting politicians to coalesce around a forced insurance scheme that will inevitably be denounced as stymying progress and innovation with no tangible harms to point to (until it's too late).
Far more likely (imo) are post hoc reaction scenarios where either:
a) We get a spectacular takeoff driven by one or a few AI labs that eat all human jobs and accrue all the profits, and society deems these payoffs unfair and arrives at a redistribution scheme that seems satisfactory (to the extent that “society” or existing political structures have enough power to enforce such a scheme)
b) We get a horrendous outcome and everyone’s SOL
* Haven’t researched this and would be delighted to hear discordant examples.