If the above seemed confusing, just replace “policy” with “regulation” and my point doesn’t change very much. I feel like it’s not that hard to reliably identify worlds with more vs. less government regulation. I agree that in some abstract sense “there is always a policy”, but I am pointing to a much more concrete effect, which is that most passed regulation seems net-negative to me, whether national or international.
I think it’s very reasonable to try to influence and change regulation that will be passed anyway, but it seems that FLI is lobbying for passing new regulation while saying nothing about where the boundaries of that regulation should be, or exploring when passing it would be net negative.
It seems that, on net, FLI’s actions are going to cause more regulation to happen, with the potential for large negative externalities relative to the status quo (which is some mixture of leaving it to market forces, social norms, trust relationships, etc.). Those negative externalities won’t necessarily be directly in the domain of autonomous weapons. Good candidates are less economic growth, less trust between policymakers and scientists, potential for abuse by autocratic governments, and increased confusion and complexity in navigating the AI space as regulation makes everything a minefield.
Another way to phrase my concerns: over the last 50 years, it appears quite plausible to me that, due to interest groups not too dissimilar to ours (such as the environmentalist movement), we have basically crippled our modern economy by passing a ridiculous volume of regulation interfering with every part of business and life. Science has substantially slowed down, with government involvement being one of the top candidates for the cause. This means that, from the perspective of global trustworthiness, we have a responsibility to really carefully analyze whether the marginal piece of legislation we are lobbying for is good, and whether it meets a greatly elevated standard of quality. I don’t see that work as having occurred here.
Indeed, almost all the writing by FLI is nearly indistinguishable from writing coming from the environmentalist movement, which has caused very large amounts of harm. I am open to supporting regulation in this space; I just really want us to be in a different reference class than past social movements that called for broad regulation without taking into account the potentially substantial costs of that regulation.
“Regulation,” in the sense of a government limitation on otherwise “free” industry, does indeed make a bit more sense, and you’re certainly entitled to the view that many pieces of regulation of the free market are net negative — though again I think it is quite nuanced, as in many cases (the DMCA would be one) regulation allows free markets that might not otherwise exist.
In this case, though, I think the more relevant reference class is “international arms control agreements” like the bioweapons convention, the convention on conventional weapons, the space treaty, landmine treaty, the nuclear nonproliferation treaties, etc. These are not so much regulations as compacts not to develop and use certain weapons. They may also include some prohibitions on industry developing and selling those weapons, but the important part is that the militaries are not making or buying them. (For example, you can’t just decide to build nuclear weapons, but I doubt it is illegal to develop or manufacture a laser-blinding weapon or landmine.)
The issue of regulation in the sense of putting limitations on AI developers (say, toward safety issues) is worth debating, but I think it is a relatively distinct one. It is absolutely important to carefully consider whether a given piece of policy or regulation is better than the alternatives (and not, I say again, “better than nothing,” because in general the alternative is not “nothing”). And I think it’s vital to try to improve existing legislation etc., which has been most of FLI’s focus, for example.
Hmm, yeah, I think there is still something missing here. I agree that regulation on a “free” industry is one example that I am thinking of, and certainly one that matters a good amount, but I am talking about something a bit broader. More something like “governments in general seem to take actions of a specific shape and structure, and in some domains, actions of that shape can make problems worse, not better”.
Like, whether something is done via an international arms control agreement or via locally passed legislation, the shape of both actions is highly similar: both are heavily constrained to be simple, both are optimized for low-trust environments, both require high levels of legibility, etc. From this perspective, there are of course some important differences between regulation and “international arms control agreements”, but they clearly share a lot of structure, and their failure modes will probably be pretty similar.
I am also importantly not arguing for “let’s just ignore the whole space”. I am arguing for something much more like “it appears that in order to navigate this space successfully, it seems really important to understand past failures of people who are in a similar reference class to us, and generally enter this space with sensible priors about downside risk”.
I think past participants in this space appear to have very frequently slid down a slope of deception, exaggeration and adversarial discourse norms, as well as ended up being taken advantage of by local power dynamics and short-term incentives, in a way that caused them to lose at least a lot of my trust, and I would like us to act in a way that is trustworthy in this space. I also want us to avoid negative externalities for the surrounding communities and people working in the x-risk space (by engaging in really polarizing discourse norms, or facilitating various forms of implicit threats and violence).
I think one important difference might be that I view AI development and regulation around autonomous weapons as deeply interlinked, at the very least via the people involved in both (such as FLI). If we act in a way that doesn’t seem trustworthy in the space of autonomous weapons, then that seems likely to reduce our ability to gain trust and enact legislation that is more directly relevant and important to issues related to AI Alignment. As such, while I agree substantially that the problems in this space share structure with the longer-term AI Alignment problem, it strikes me as being of paramount importance to display excellent judgement about which problems should be attacked via regulation and which ones should not. It appears to me that the trust to design and enact legislation depends directly on the trust that you will not enact bad or unnecessary regulation, and as such, getting this answer right in the case of autonomous weapons seems pretty important (which is why I find the test-case framing somewhat less compelling).
My guess is there are a number of pretty large inferential distances here, so I'm not sure how best to proceed. I would be happy to jump on a call sometime, if you are interested. In all of this I also want to make clear that I am really glad you wrote this post, and while I have some disagreements with it, it is a great step forward in helping me understand where FLI is coming from, and it contains a number of arguments I find pretty compelling and hadn’t considered before.
You’re certainly entitled to your (by conventional standards) pretty extreme anti-regulatory view of e.g. the FDA, IRBs, environmental regulations, etc., and to your prior that regulations are in general highly net negative. I don’t share those views but I think we can probably agree that there are regulations (like seatbelts, those governing CFCs, asbestos, leaded gasoline, etc.) that are highly net positive, and others (e.g. criminalization of some drugs, anti-cryptography, industry protections against class action suits, etc.) that are nearly completely negative. What we can do to maximize the former and minimize the latter is a discussion worth having, and a very important one.
In the present case of autonomous weapons, I again think the right reference class is that of things like the bioweapons convention and the space treaty. I think these, too, have been almost unreservedly good: they made the world more stable, avoided potentially catastrophic arms races, and left industries (like biotech, pharma, the space industry, the arms industry) perfectly healthy and arguably (especially for biotech) much better off than they would have been with a reputation mixed up in creating horrifying weapons. I also think in these cases, as with at least some AWs like antipersonnel WMDs, there is a pretty significant asymmetry, with the negative effects (of no regulation) having a tail into extremely bad outcomes, while the negative effects of well-structured regulations seem pretty mild at worst. Those are exactly the sorts of regulations/agreements I think we should be pushing on.
Very glad I wrote up the piece as I did, it’s been great to share and discuss it here with this community, which I have huge respect for!
It’s been a while since I looked into this, but if I remember correctly the space treaty is currently a major obstacle to industrializing space, since it makes it mostly impossible to have almost any form of private property on asteroids, the Moon, or other planets, and creates a great deal of regulatory ambiguity and uncertainty for anyone wanting to work in this space.
When I talked to legal experts in the space for an investigation I ran for FHI 3-4 years ago, my sense was that the space treaty was a giant mess: nobody knew what it implied or meant, and it was generally so ambiguous and unclear, with so little binding power, that none of the legal scholars expected it to hold up in the long run.
I do think it’s likely that the space treaty overall did help demilitarize space, but given a lot of the complicated concerns around MAD, it’s not obvious to me that that was actually net-positive for the world. In any case, I really don’t see why one would list the space treaty as an obvious success.
I am not an expert on the Outer Space Treaty either, but also going by anecdotal evidence, I have always heard it described as being of considerable benefit and a remarkable achievement of diplomatic foresight during the Cold War. However, I would welcome any published criticisms of the Outer Space Treaty you wish to provide.
It’s important to note that the treaty was originally ratified in 1967 (that is, ~two years before the Moon landing and ~5 years after the Cuban Missile Crisis). If you critique a policy for its effects long after its original passage (as with reference to space mining, or as others have with the effects of Section 230 of the CDA, passed in 1996), your critique is really about the government(s) failing to update and revise the policy, not about the enactment of the original policy. Likewise, it is important to run the counterfactual in which the policy was never enacted. In this circumstance, I’m not sure how you envision a breakdown in US-USSR (and other world powers) negotiations on the demilitarization of space in 1967 would have led to better outcomes.
your critique is really about the government(s) failing to update and revise the policy, not about the enactment of the original policy.
This feels like a really weird statement to me. It is highly predictable that as soon as a law is in place, there is an incentive to use it for rent-seeking, and that abolishing a policy is reliably much harder than enacting a new one. When putting legislation in place, the effects I will hold you responsible for of course include the highly predictable problems that will occur when your legislation has not been updated or revised in a long time. That’s one of the biggest limitations of legislation as a tool for solving problems!