I like this proposal. It’s a fun rethinking of the problem.
However,
How can you even approximate a fair price for these payouts? AI risks are extremely conditional and depend on difficult-to-quantify assumptions: "The model leaked AND optimized itself to run on the insecure, Internet-connected computers available at the time of escape AND humans failed to stop it AND..."
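To make that concrete, here's a minimal sketch of how such a price would have to be computed: multiply a chain of conditional probabilities by the payout. Every number below is hypothetical, invented purely for illustration; the point is that the "fair" premium swings by orders of magnitude depending on assumptions nobody can verify.

```python
# Hypothetical sketch: pricing a catastrophe payout from a chain of conditional probabilities.
# All numbers here are made up for illustration.

payout = 10e9  # $10B payout if the catastrophe occurs

# Chain of conditional events, each required for the catastrophe:
# P(leak) * P(self-optimizes | leak) * P(humans fail to stop it | ...) * ...
optimistic = [1e-3, 1e-2, 1e-2]   # one assessor's guesses
pessimistic = [1e-1, 0.5, 0.3]    # another assessor's guesses

def fair_premium(conditionals, payout):
    """Expected cost = product of the conditional probabilities times the payout."""
    p = 1.0
    for c in conditionals:
        p *= c
    return p * payout

print(fair_premium(optimistic, payout))    # ~ $1,000
print(fair_premium(pessimistic, payout))   # ~ $150,000,000
```

Two assessors using the same formula but different unverifiable guesses land five orders of magnitude apart.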
For something like a nuclear power plant, most of the risk comes from black swans. There are a ton of safety systems and mechanisms to cool the core, and we know from actual accidents that when they fail, it's not because each piece of equipment independently failed at the same time. This matters for AI risk because multiplying failure probabilities along a series of defenses does not tell you the true risk (a toy numerical illustration follows the examples below).
For every meltdown I am aware of, the accident happened because human operators or an unexpected common cause made all the layers of safety fail at once.
Three Mile Island: operators misunderstood the situation and turned off cooling.
Chernobyl: operators bypassed the automated control system with patch cables and put the core into an unstable part of the operating curve.
Fukushima: a plant-wide power failure, and road conditions prevented spare generators from being brought on site quickly.
In each case the causes were coupled and can be thought of as a single cause. Adding an (n+1)th serial defense might not have helped (it depends on what that defense is).
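Here is the promised toy illustration, again with entirely made-up numbers: if you model n independent defense layers, multiplying their failure rates gives a vanishingly small number, but a single common-cause term (an operator error, a plant-wide blackout) that defeats every layer at once dominates the real risk.

```python
# Toy illustration with hypothetical numbers: independent series vs. common-cause failure.

# Naive model: 4 independent defense layers, each failing 1% of the time.
layer_failure = 0.01
n_layers = 4
independent_risk = layer_failure ** n_layers   # 1e-8: looks negligible

# Common-cause model: with some probability a single event (operator error,
# plant-wide power loss) takes out every layer at once.
p_common_cause = 1e-4
true_risk = p_common_cause + (1 - p_common_cause) * independent_risk

print(independent_risk)  # 1e-08
print(true_risk)         # ~1e-04: ten thousand times larger, dominated by the common cause
```

The series calculation isn't just slightly off; it misses essentially all of the risk.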
If AI does successfully kill everyone, it's going to be in a way humans didn't model.
Mineshaft gap argument. Large fees on AI companies simply encourage them to set up shop in countries that don't charge the fees. In the futures where AI doesn't kill everyone, those countries will flourish or conquer the planet, so the other countries have to drop the fees and subsidize hasty catch-up ASI research or risk losing. In the futures where AI does attack and try to kill everyone, not having tool AI (aligned only with its user) increases the probability that the attacking AI wins: most defensive measures are stronger if you have your own AI to scale production (more bunkers, more nukes to fire back, more spacesuits to stop the bio and nano attacks, more drones...).