You can think of AI x-risk in terms of the Coase theorem: AI work creates an externality, at least in expectation, so we could solve it with Coasean bargaining if
- property rights are strong
- transaction costs are low
The problem with the AI risk externality is that long-term property rights on Earth are very weak: countries typically collapse eventually, and all property that was enforced by their governments is lost with them. AI takeovers are dangerous because they could crush the current governments and simply take everything.
I think the transaction costs are probably dominated by information problems (which AIs are actually dangerous?), though negotiation costs are also worth considering. Still, these are the relatively tamer problems; the big one is how to create stronger property rights.
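To make the bargaining logic concrete, here is a toy sketch of a Coasean deal; the payoffs are made-up numbers chosen purely for illustration, not estimates:

```latex
% Hypothetical payoffs, chosen only to illustrate the mechanism.
\[
\underbrace{G}_{\text{developer's gain from risky AI work}} = 100,
\qquad
\underbrace{H}_{\text{others' expected harm}} = 150 .
\]
% Since H > G, any transfer T with G < T < H makes both sides
% better off: the developer accepts T = 120 > 100 to stand down,
% and everyone else prefers paying 120 to bearing 150.
% The bargain requires (i) an enforceable claim against the harm
% (property rights) and (ii) transaction costs c < H - G = 50.
```

The point of the toy example is that both conditions are load-bearing: without an enforceable claim there is nothing to trade, and if transaction costs exceed the 50 units of surplus, the gains from the deal are eaten before it can be struck.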
So we mostly need to think about how to make property rights stronger. That could be achieved in various ways:
- use legislation to place limitations on AI
- use AI to make current governments more resistant to coups or other forms of influence
- use AI to outright replace current governments with a new governance system that’s much more resistant to coups or other forms of unwanted influence
- make takeoff more gradual and less sharp (reduce the “curvature of takeoff”), for example by allocating more funding to AI companies earlier in the process; counterintuitively, faster can be safer if it reduces the curvature, as the sketch after this list tries to illustrate
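As a rough formalization of what “curvature of takeoff” could mean here (the exponential functional form is my assumption, not something from the original comment):

```latex
% Model capability as exponential growth; read "curvature" as the
% second derivative of the capability curve.
\[
C(t) = C_0\, e^{k t}
\quad\Rightarrow\quad
C''(t) = k^{2}\, C(t).
\]
% At any fixed capability level C*, the curvature is k^2 C*, so a
% smaller growth rate k means a flatter approach to that same level.
% Earlier funding plausibly raises C_0 (progress starts sooner) while
% shrinking the late compressed sprint (large k), which is the sense
% in which "faster can be safer".
```

Under this reading, the goal is not a lower capability endpoint but a smaller growth rate at the moment dangerous capabilities arrive, leaving more time to react per unit of capability gained.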
I meant the noise pollution example in my essay to be the Coase theorem, but I agree with you that property rights are not strong enough to solve AI risk this way. I also agree that AI will open up new paths for solving all kinds of problems, including solutions that could end up helping with alignment.