Having noticed that problem as a major bottleneck to useful legislation, I’m now a lot more interested in legal approaches to AI X-risk which focus on catastrophe insurance. That would create a group—the insurers—who are strongly incentivized to acquire the requisite technical skills and then make plans/requirements which actually address some risks.
I don’t understand this. Isn’t the strongest incentive already present (because extinction would affect them)? Or maybe you mean smaller scale ‘catastrophes’?
I think people mostly don’t believe in extinction risk, so the incentive isn’t nearly as real/immediate.
+1, and even for those who do buy extinction risk to some degree, financial/status incentives usually have more day-to-day influence on behavior.
I’m imagining this:
Case one: would-be catastrophe insurers don’t believe in x-risks, and don’t care to investigate. (At stake: their lives.)
Case two: catastrophe insurers don’t believe in x-risks, and either don’t care to investigate, or do for some reason I’m not seeing. (At stake: their lives and insurance profits, which are correlated.)
They can believe in catastrophic but non-existential risks. (Like, AI causes something like the CrowdStrike outage periodically if you’re not trying to prevent that.)