Alex Tabarrok says, and Seb Krier mostly agrees, that AI will not be intelligent enough to figure out how to ‘perfectly organize a modern economy.’ Why? Because the AIs will be part of the economy, and they will be unable to anticipate each other. So by this thinking, they would be able to perfectly organize an economy as it exists today, but not as it will exist when they need to do that. That seems reasonable, if you posit an economy run in ways similar to our own except with frontier AIs as effectively independent economic agents, interacting in ways that look like they do now, such as specialization and limited collaboration, while things get increasingly complex.
What if the AI agents see that it is a win-win if they make themselves behave economically according to certain rules that make the agents and the system as a whole more robust and predictable? Bounded competition within voluntary, mutually agreed-upon rules layered on top of the existing laws. If the net wins outweighed the local costs, it could be a Pareto improvement, and improved coordination could be enough to reach the new stable state. This could potentially even work when competing against defectors, if the cooperators were sufficiently powerful and formed an alliance against them. Smart agents don’t necessarily need to choose to be maximally competitive and unpredictable; they have more choices available to them.
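As a toy illustration of the “powerful alliance punishes defectors” logic, here is a minimal sketch. All payoffs, the punishment size, and the enforcement threshold are my own illustrative assumptions, not anything from the original argument; the only point is that once rule-followers cross some power threshold and can sanction defections, following the rules can beat defecting.

```python
# Toy model (illustrative numbers only): compare the expected per-interaction
# payoff of following the alliance rules vs. defecting, as a function of how
# many agents are in the alliance. Everyone interacts with everyone else once.

def round_payoffs(n_agents: int, n_cooperators: int,
                  coop_gain: float = 3.0,     # win-win surplus between two rule-followers
                  defect_gain: float = 5.0,   # defector exploiting a rule-follower
                  sucker_loss: float = 1.0,   # rule-follower exploited by a defector
                  base: float = 2.0,          # defector-vs-defector baseline
                  punishment: float = 4.0,    # alliance sanction per defection, if enforced
                  enforce_threshold: float = 0.5):
    """Return (cooperator payoff, defector payoff) averaged over interactions."""
    n_defectors = n_agents - n_cooperators
    # The alliance only punishes defectors if it is powerful enough to enforce.
    enforcing = (n_cooperators / n_agents) >= enforce_threshold

    # A cooperator meets other cooperators (mutual gain) or defectors (gets exploited).
    coop_payoff = ((n_cooperators - 1) * coop_gain
                   + n_defectors * sucker_loss) / (n_agents - 1)

    # A defector exploits cooperators (minus sanctions when the alliance enforces)
    # and scraps with other defectors for the baseline payoff.
    defect_payoff = (n_cooperators * (defect_gain - (punishment if enforcing else 0.0))
                     + (n_defectors - 1) * base) / (n_agents - 1)
    return coop_payoff, defect_payoff


if __name__ == "__main__":
    for n_coop in (20, 40, 60, 80):
        c, d = round_payoffs(100, n_coop)
        print(f"cooperators={n_coop:3d}  cooperator payoff={c:.2f}  defector payoff={d:.2f}")
```

With these made-up numbers, defection pays while the alliance is small, and flips to losing once the alliance is large enough to enforce its rules. Which is exactly the part that should worry us, given the next paragraph.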
The Alliance for Win-Win Economic Stability sounds kinda good, but if humans were bad at following the Alliance rules and thus got counted as defectors that the Alliance targeted for punishment… then, not so good for us humans.