Great post. I agree with almost all of this. What I am uncertain about is the idea that AI existential risk is a rights violation under the most strict understanding of libertarianism.
As another commenter has suggested, we can’t claim that every externality creates a right to stop or punish the behavior that causes it, or libertarianism collapses into safetyism.[1] If we take the Non-Aggression Principle (NAP) as a common standard for the hardline libertarian view of which harms give you a right to restitution or retaliation, it seems that x-risk does not fit this definition.
1. The clearest evidence is that Murray Rothbard wrote the following:

“It is important to insist [...] that the threat of aggression be palpable, immediate, and direct; in short, that it be embodied in the initiation of an overt act. [...] Once we bring in “threats” to person and property that are vague and future—i.e., are not overt and immediate—then all manner of tyranny becomes excusable.” (The Ethics of Liberty p. 78)
X-risk by its very nature falls into the category of “vague and future.”
2. To take your specific example of flying planes over someone’s house: Walter Block, a follower of Rothbard, has argued that this exact risk is not a violation of the non-aggression principle. He also states that risks from nuclear power are “legitimate under libertarian law.” (p. 295)[2] If we consider AI analogous to these two risks, it seems Block would not agree that there is a right to seek compensation for x-risk.
3. Matt Zwolinski criticized the NAP for having an “all-or-nothing attitude toward risk,” as it does not indicate what level of risk constitutes aggression. Another libertarian writer responded that a risk constituting a direct “threat” is aggression (e.g. pointing a pistol at someone, even if the victim is never shot), but risks of accidental damage are not aggression unless these risks are imposed with threats of violence:
“If you don’t wish to assume the risk of driving, then don’t drive. And if you don’t want to run the risk of an airplane crashing into your house, then move to a safer location. (You don’t own the airspace used by planes, after all.)”
This implies to me that Zwolinski’s criticism is accurate with regard to accidents, which would rule out x-risk as a NAP violation.
Conclusion
This shows that at least some libertarians’ understanding of rights does not include x-risk as a violation. I consider this to be a point against their theory of rights, not an argument against pursuing AI safety. The most basic moral instinct suggests that creating a significant risk of destroying all of humanity and its light-cone is a violation of the rights of each member of humanity.[3]
While I think that excluding AI x-risk (and other risks and accidental harms) from its definition of proscribable harms makes the NAP too narrow, the question remains which externalities or risks give victims a right to payment, and which do not. I’m curious where you draw the line.
It is possible that I am misunderstanding something about libertarianism or x-risk that contradicts the interpretation I have drawn here.
Anyway, thanks for articulating this proposal.

[1] See also this argument by Alexander Volokh:

“Some people’s happiness depends on whether they live in a drug-free world, how income is distributed, or whether the Grand Canyon is developed. Given such moral or ideological tastes, any human activity can generate externalities […] Free expression, for instance, will inevitably offend some, but such offense generally does not justify regulation in the libertarian framework for any of several reasons: because there exists a natural right of free expression, because offense cannot be accurately measured and is easy to falsify, because private bargaining may be more effective inasmuch as such regulation may make government dangerously powerful, and because such regulation may improperly encourage future feelings of offense among citizens.”
[2] Block argues that it would be wrong for individuals to own nuclear weapons, but he does not make clear why this is a meaningful distinction.
[3] And any extraterrestrials in our light-cone, if they have rights. But that’s a whole other post.