I’m curious how antitrust enforcement will be able to deal with progress in AI. (I know very little about antitrust laws.)
Imagine a small town with five barbershops. Suppose an antitrust law makes it illegal for the five barbershop owners to have a meeting in which they all commit to increase prices by $3.
Suppose each of the five barbershops decides to start using some off-the-shelf deep-RL-based system to set its prices. Suppose that after some time in which they’re all using such systems, lo and behold, they all gradually increase prices by $3. If the relevant government agency notices this, whom can it potentially accuse of committing a crime? Each barbershop owner is just setting their prices to whatever their off-the-shelf system recommends (and that system is a huge neural network that no one understands at a relevant level of abstraction).
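To make the scenario concrete, here is a minimal sketch, with a made-up demand model and made-up numbers, of two independent Q-learning pricing agents in a repeated market. Nothing in the code lets them communicate, yet under repeated play such agents can sometimes settle above the competitive price (results in this vein have been reported in the algorithmic-pricing literature, e.g. by Calvano et al.):

```python
# Toy illustration (invented numbers, not the commenter's setup): two
# independent Q-learning "pricing systems" repeatedly set prices in a
# shared market, with no communication channel between them.
import random

PRICES = [10, 11, 12, 13]  # hypothetical price menu; 10 is the competitive price
COST = 8                   # hypothetical unit cost

def profit(own, rival):
    """Toy demand: the cheaper shop gets most customers; ties split the market."""
    demand = 1.0 if own < rival else 0.5 if own == rival else 0.2
    return (own - COST) * demand

def train(steps=200_000, alpha=0.1, gamma=0.95, eps=0.1):
    # State = last round's price pair; one Q-table per agent.
    states = [(a, b) for a in PRICES for b in PRICES]
    q = [{s: [0.0] * len(PRICES) for s in states} for _ in range(2)]
    state = (random.choice(PRICES), random.choice(PRICES))
    for _ in range(steps):
        # Epsilon-greedy action choice for each agent.
        acts = [random.randrange(len(PRICES)) if random.random() < eps
                else max(range(len(PRICES)), key=lambda a: q[i][state][a])
                for i in range(2)]
        new_state = (PRICES[acts[0]], PRICES[acts[1]])
        rewards = (profit(new_state[0], new_state[1]),
                   profit(new_state[1], new_state[0]))
        for i in range(2):
            # Standard one-step Q-learning update.
            target = rewards[i] + gamma * max(q[i][new_state])
            q[i][state][acts[i]] += alpha * (target - q[i][state][acts[i]])
        state = new_state
    return state

if __name__ == "__main__":
    print("prices after training:", train())
```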
This doesn’t require AI; it happens anywhere that competing prices are easily observable and fairly mutable. AI will be neither more nor less liable than humans making the same decisions would be.
This doesn’t require AI; it happens anywhere that competing prices are easily observable and fairly mutable.
It happens without AI to some extent, but if a lot of businesses end up setting prices via RL-based systems (which seems likely to me), then I think it may happen to a much greater extent. Consider that in the example above, it may be very hard for the five barbers to coordinate a $3 price increase without any communication (and without AI) if, by assumption, the only Nash equilibrium of the one-shot game is the state where all five barbers charge market prices.
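To see the one-shot logic concretely, here is a best-response check in the same toy market as the sketch above (same made-up numbers): whatever common price the others charge above the competitive level, undercutting is the better one-shot response, so the higher price cannot be a one-shot Nash equilibrium.

```python
# One-shot best responses in the toy market from the sketch above
# (same invented demand and cost numbers).

PRICES = [10, 11, 12, 13]
COST = 8

def profit(own, rival):
    demand = 1.0 if own < rival else 0.5 if own == rival else 0.2
    return (own - COST) * demand

for rival in PRICES:
    best = max(PRICES, key=lambda own: profit(own, rival))
    print(f"if the other shops charge {rival}, the best one-shot response is {best}")

# The best response to the high price (13) is to undercut it, so "everyone
# charges 13" is not a Nash equilibrium of the one-shot game.
```

Repeated interaction changes this: retaliate-if-undercut strategies can sustain the high price, and that is exactly the kind of strategy an RL system can stumble into without any communication.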
AI will be neither more nor less liable than humans making the same decisions would be.
People sometimes go to jail for illegally coordinating prices with competitors; I don’t see how an antitrust enforcement agency could hold anyone liable in the above example.
In theory, antitrust problems could be less of an issue with software, because a company could be ordered to make the source code for its products public. (Though this might set up bad incentives over the long run, and I don’t think this is how such things are usually handled; Microsoft’s history seems relevant.)
Suppose the code of the deep RL algorithm that was used to train the huge policy network is publicly available on GitHub, along with everything else that was used to train the policy network, plus the final policy network itself. How, in the above example, can an antitrust enforcement agency use all this to determine whether an antitrust violation has occurred?
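One speculative possibility, not a description of existing enforcement practice: treat the published policy network as a black box and audit it behaviorally, e.g. force a simulated competitor to undercut and check whether the policy responds with a temporary price war before returning to the high price, a signature of a learned punishment scheme rather than cost-based pricing. A sketch, with the trained network stubbed out by a hypothetical hand-written rule:

```python
# Sketch of a behavioral audit an agency might run on a published pricing
# policy. Everything here is invented for illustration: `policy` stands in
# for the trained network (here a hand-written rule that mimics a learned
# punishment strategy), and the detection criterion is one heuristic, not
# established legal doctrine.

COMPETITIVE, COLLUSIVE = 10, 13  # hypothetical price levels

def policy(state):
    """Stand-in for the trained policy: next price as a function of last round."""
    my_last, rival_last = state
    if rival_last < my_last:   # I was undercut last round:
        return COMPETITIVE     #   retaliate with a price war
    return COLLUSIVE           # otherwise hold the high price

def probe_for_punishment(policy, rounds=6):
    """Force a one-round competitor deviation and watch the policy's response.
    A retaliate-then-revert pattern suggests a learned collusive scheme."""
    my_prev, rival_prev = COLLUSIVE, COLLUSIVE
    trace = []
    for t in range(rounds):
        my = policy((my_prev, rival_prev))
        rival = COMPETITIVE if t == 1 else COLLUSIVE  # forced deviation at t=1
        trace.append(my)
        my_prev, rival_prev = my, rival
    punished = COMPETITIVE in trace[2:4]  # price war right after the deviation
    reverted = trace[-1] == COLLUSIVE     # then back up to the high price
    return trace, punished and reverted

if __name__ == "__main__":
    trace, flagged = probe_for_punishment(policy)
    print(trace, "-> punish-and-revert signature:", flagged)
```

Whether such a behavioral signature could count as evidence of an “agreement” under current antitrust doctrine is, of course, exactly the open question here.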