AI development is a tragedy of the commons

Per Wikipedia:

In economic science, the tragedy of the commons is a situation in which individual users, who have open access to a resource unhampered by shared social structures or formal rules that govern access and use, act independently according to their own self-interest and, contrary to the common good of all users, cause depletion of the resource through their uncoordinated action.

The usual example of a TotC is a fishing pond: everyone wants to fish as much as possible, but fish are not infinite, and if you catch them faster than they can reproduce, you end up with fewer and fewer fish per catch.

AI development seems to have a similar dynamic: everyone has an incentive to build more and more powerful AIs, because there is a lot of money to be made in doing so. But each more powerful AI increases the likelihood of an unstoppable AGI being built.

There are some differences, but I think this is the underlying dynamic driving AI development today. The biggest difference is that one person's overfishing eventually causes a noticeable negative effect on other fishers, and at the very least does not improve their own catches, whereas one firm building a more powerful AI probably does improve the economic situation of the other people who leverage it, up until a critical point.

Are there other tragedies of the commons that exhibit such non-monotonic behavior?

With a little stretch, EVERY coordination problem is a tragedy of the commons. It's only a matter of identifying the resource that is limited but has uncontrolled consumption.

In this case, it IS a stretch to think of an "evil-AGI-free world" as a resource that's being consumed, and it doesn't really lead to solutions: many TotC problems can be addressed by defining property rights and figuring out who has the authority or ability to exclude uses in order to protect the long-term value of the resource.
Why is it a stretch?
It’s hard to quantify the resource or define how it reduces with use or how it’s replenished. This makes it an imperfect match for the TotC analogy.
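The fishing-pond dynamic discussed above is easy to make concrete with a toy simulation: the stock regrows logistically toward a carrying capacity, and each fisher harvests a fixed fraction of the stock every season. All the parameter values here are illustrative assumptions, not empirical estimates.

```python
def simulate(n_fishers, effort_per_fisher, seasons=50,
             stock=100.0, capacity=100.0, growth_rate=0.3):
    """Return the per-fisher catch in each season of a toy fishery."""
    catches = []
    for _ in range(seasons):
        # Each fisher takes a fixed fraction of the current stock.
        total_catch = min(stock, stock * effort_per_fisher * n_fishers)
        catches.append(total_catch / n_fishers)
        stock -= total_catch
        # Logistic regrowth toward the carrying capacity.
        stock += growth_rate * stock * (1 - stock / capacity)
    return catches

# Restrained harvesting: the stock settles at a sustainable level.
sustainable = simulate(n_fishers=5, effort_per_fisher=0.02)

# Everyone maximising their own take: bigger catches at first,
# then the stock collapses and catches go to zero.
greedy = simulate(n_fishers=5, effort_per_fisher=0.15)

print(f"first season:  sustainable={sustainable[0]:.2f}  greedy={greedy[0]:.2f}")
print(f"final season:  sustainable={sustainable[-1]:.2f}  greedy={greedy[-1]:.6f}")
```

The individual incentive is visible in the first season, where the greedy strategy pays each fisher more; the collective outcome is visible in the last season, where the greedy pond is empty while the restrained one still yields.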