My summary / commentary:
Often, AI safety proponents talk about things that might be nice, like agreements not to do dangerous things, and focus on questions of how to make those agreements in everyone's interest, how to measure compliance with them, and so on. Often these hopes take the shape of voluntary agreements adopted by professional organizations, or by large companies that jointly dominate a field. [In my personal view, it seems more likely we can convince AI engineers and researchers than legislators to adopt sensible policies, especially in the face of potentially rapid change.]
This paper asks the question: could such agreements even be legal? What underlying factors drive legality, so that we could structure the agreements to maximize the probability that they would hold up in court?
Overall, I appreciated the groundedness of the considerations, and the sense of spotting a hole that I might otherwise have missed. [I'm so used to thinking of antitrust in the context of 'conspiracy against the public' that it didn't occur to me that a 'conspiracy for the public' might run afoul of the same prohibitions, and yet once pointed out it seems definitely worth checking.]
An obvious follow-up question that occurs to me: presumably, in order to be effective, these agreements would have to be international. [Some sorts of unsafe AI, like autonomous vehicles, mostly do local damage, but other sorts of unsafe AI, like autonomous hackers, can easily do global damage, and creators can preferentially seek out legal environments favorable to their misbehavior.] Are there similar sorts of obstacles that would stand in the way of global coordination?