But in trying to figure out what might work, actually interesting solutions may well arise.
Hmm, this feels like it highlights some problem I have with FLI’s work in this domain. As you seem to agree here, it’s pretty plausible that there is no legislation that is particularly useful in this space, because legislation is heavily limited in how complicated and nuanced it can be, and heavily constrained by how robust to rules-lawyering it has to be, and so it’s pretty plausible to me that all legislation in this space is a bad idea.
But both this article, and a bunch of other FLI material, treat the bottom line as “there is obviously legislation that is good here”, which just feels pretty off to me, and I don’t feel like the existing material has met the necessary burden of proof to show that marginal legislation here would work, and be net-positive.
In general, it seems that historically humanity has been vastly overeager in trying to solve problems with legislative regulation, and overestimating what problems can be solved with legislative regulation, causing a really massive amount of damage (and, in my worldview, also contributing substantially to humanity’s inability to navigate existential and catastrophic risk, as evidenced by COVID). As such, I feel like we have a pretty strong responsibility to not call for further regulation that is badly suited to solving the problems at hand, and this domain very much pattern matches to this problem for me.
I can imagine changing my mind on this after looking more into the details and the specific policy proposals, but the specific things I’ve heard of, like mandating a human in the loop, don’t seem to me like good solutions that solve substantial parts of the problem. There is some chance that somewhere in FLI’s proposals there are policy solutions that feel both feasible and actually useful to me, but I haven’t encountered them, and on priors, the type of communication that FLI seems to be pursuing (launching viral campaigns with satirical videos and dramatic presentation, together with blogposts mostly filled with strong rhetoric and little nuanced analysis) really doesn’t fill me with confidence that serious effort is being put into asking whether regulation here is at all a good idea, and how regulation could backfire.
Maybe this is how policy gets made in practice, but then again, the vast majority of policy passed is strongly net-negative, so just because that’s how policy usually gets made doesn’t mean we should do the same.
Thanks Oliver for this, which likewise very much helps me understand better where some of the ideological disagreements lie. Your statement “but then again, the vast majority of policy passed is strongly net-negative” encapsulates it well. Leaving aside that (even if we could agree on what “positive” and “negative” were) this seems almost impossible to evaluate, it indicates a view that the absence of a policy on something is “no policy”. Whereas in my view in the vast majority of situations the absence of some policy is some other policy, whether it’s explicit or implicit. Certainly in the case of AWs, the US (and other militaries) will have some policy about AWs. That’s not at issue. At issue is what the policy will be. And some policies (like prohibiting autonomous WMDs) make much less sense in the absence of the context of an international agreement. So creating that context can create the possibility for a wider range of policies, including better ones.
More generally, when you look at arenas where there is “no policy” often there actually is one. For example, as I understand it, the modern social media ecosystem does not exist because there is no policy governing it, but due to the DMCA. Had that Act been different (or nonexistent) other policies would be governing things instead, for better or worse. Or, if there were no FDA, there would still be policy: it would govern advertising, and lawsuits, and independent market-based drug-testers, and so on. In a more abstract sense, I view policy as a general term for the basic legal structure governing how our society works, and there isn’t a “default” setting. There are settings in various situations that are “leave this to market forces” or “tightly regulate this with a strict government agency” and all manner of others, but those are generally choices implicitly or explicitly made. The US has made “leave it to market forces” much more of the norm and default (which has had a lot of great results!), but that is, at a higher scale, still a policy choice. There are lots of other ways to organize a society, and we’ve tried some of them. When we’re talking about development of weapons and international diplomacy I don’t think there is a reliably good default — at all.
So I think it’s quite reasonable to ask what the policy proposals are, and evaluate the particular proposals — as Ben and you are doing. But I don’t think it’s fair or wise to assume that the policies that will be generated by existing actors, in AI weaponry, or AGI, or whatever else, are likely to be particularly good. Policies will exist whether we participate or not, and they will come into being due to the efforts of policymakers guided by various interests. That they are most likely better without the input of groups like FLI, who understand the issues and stakes, encompass a lot of expertise, and are working purely in the interest of humanity and its long-term flourishing, seems improbable even from an outside view. And of course, from the inside view, seeing the actual work we have done, I don’t think it’s the case.
If the above seemed confusing, just replace “policy” with “regulation” and my point doesn’t change very much. I feel like it’s not that hard to reliably identify worlds with more vs. less government regulation. I agree that in some abstract sense “there is always a policy”, but I am pointing to a much more concrete effect, which is that most passed regulation seems net-negative to me, whether national or international.
I think it’s very reasonable to try to influence and change regulation that will be passed anyways, but it seems that FLI is lobbying for passing new regulation while, importantly, saying nothing about where the boundaries of that regulation should be, or about when passing it would be net-negative.
It seems that, on net, FLI’s actions are going to cause more regulation to happen, with the potential for large negative externalities over the status quo (which is some mixture of leaving it to market forces, social norms, trust relationships, etc.). Those negative externalities won’t necessarily be directly in the domain of autonomous weapons. Good candidates for externalities are less economic growth, less trust between policymakers and scientists, potential for abuse by autocratic governments, and increased confusion and complexity of navigating the AI space due to regulation making everything a minefield.
Another way to phrase my concerns is that over the last 50 years, it appears quite plausible to me that, due to interest groups not too dissimilar to ours, such as the environmentalist movement, we have basically crippled our modern economy by passing a ridiculous volume of regulation interfering with every part of business and life. Science has substantially slowed down, with government involvement being one of the top candidates for the cause. This means that, from my perspective of global trustworthiness, we have a responsibility to really carefully analyze whether the marginal piece of legislation we are lobbying for is good, and whether it meets a greatly elevated standard of quality. I don’t see that work as having occurred here.
Indeed, almost all the writing by FLI is almost indistinguishable from writing coming from the environmentalist movement, which has caused very large amounts of harm. I am open to supporting regulation in this space; I just really want us to be in a different reference class than past social movements that called for broad regulation without taking into account the potential substantial costs of that regulation.
“Regulation,” in the sense of a government limitation on otherwise “free” industry, does indeed make a bit more sense, and you’re certainly entitled to the view that many pieces of regulation of the free market are net negative — though again I think it is quite nuanced, as in many cases (DMCA would be one) regulation allows more free markets that might not otherwise exist.
In this case, though, I think the more relevant reference class is “international arms control agreements” like the bioweapons convention, the convention on conventional weapons, the space treaty, landmine treaty, the nuclear nonproliferation treaties, etc. These are not so much regulations as compacts not to develop and use certain weapons. They may also include some prohibitions on industry developing and selling those weapons, but the important part is that the militaries are not making or buying them. (For example, you can’t just decide to build nuclear weapons, but I doubt it is illegal to develop or manufacture a laser-blinding weapon or landmine.)
The issue of regulation in the sense of putting limitations on AI developers (say toward safety issues) is worth debating, but I think it is a relatively distinct one. It is absolutely important to carefully consider whether a given piece of policy or regulation is better than the alternatives (and not, I say again, “better than nothing”, because in general the alternative is not “nothing”). And I think it’s vital to try to improve existing legislation etc., which has been, for example, most of FLI’s focus.
Hmm, yeah, I think there is still something missing here. I agree that regulation on a “free” industry is one example that I am thinking of, and certainly one that matters a good amount, but I am talking about something a bit broader. More something like “governments in general seem to take actions of a specific shape and structure, and in some domains, actions of that shape can make problems worse, not better”.
Like, whether something is done via an international arms control agreement, or locally passed legislation, the shape of both of those actions is highly similar, with both being heavily constrained to be simple, both being optimized for low trust environments, both requiring high levels of legibility, etc. In this perspective, there are of course some important differences between regulation and “international arms control agreements”, but they clearly share a lot of structure, and their failure modes will probably be pretty similar.
I am also importantly not arguing for “let’s just ignore the whole space”. I am arguing for something much more like “it appears that, in order to navigate this space successfully, it is really important to understand past failures of people who are in a similar reference class to us, and to generally enter this space with sensible priors about downside risk”.
I think past participants in this space appear to have very frequently slid down a slope of deception, exaggeration and adversarial discourse norms, as well as ended up being taken advantage of by local power dynamics and short-term incentives, in a way that caused them to lose at least a lot of my trust, and I would like us to act in a way that is trustworthy in this space. I also want us to avoid negative externalities for the surrounding communities and people working in the x-risk space (by engaging in really polarizing discourse norms, or facilitating various forms of implicit threats and violence).
I think one important difference might be that I view the case of AI development and regulation around autonomous weapons as deeply interlinked, at the very least via the people involved in both (such as FLI). If we act in a way that doesn’t seem trustworthy in the space of autonomous weapons, then that seems likely to reduce our ability to gain trust and enact legislation that is more directly relevant and important to issues related to AI Alignment. As such, while I agree substantially that the problems in this space share structure with the longer-term AI Alignment problem, it strikes me as being of paramount importance to display excellent judgement on which problems should be attacked via regulation and which ones should not be. It appears to me that the trust to design and enact legislation is directly dependent on the trust that you will not enact bad or unnecessary regulation, and as such, getting this answer right in the case of autonomous weapons seems pretty important (which is also why I find the test-case framing somewhat less compelling).
My guess is there are a number of pretty large inferential distances here, so not sure how to best proceed. I would be happy to jump on a call sometime, if you are interested. In all of this I also want to make clear that I am really glad you wrote this post, and while I have some disagreements with it, it is a great step forward in helping me understand where FLI is coming from, and it contains a number of arguments I find pretty compelling and hadn’t considered before.
You’re certainly entitled to your (by conventional standards) pretty extreme anti-regulatory view of e.g. the FDA, IRBs, environmental regulations, etc., and to your prior that regulations are in general highly net negative. I don’t share those views but I think we can probably agree that there are regulations (like seatbelts, those governing CFCs, asbestos, leaded gasoline, etc.) that are highly net positive, and others (e.g. criminalization of some drugs, anti-cryptography, industry protections against class action suits, etc.) that are nearly completely negative. What we can do to maximize the former and minimize the latter is a discussion worth having, and a very important one.
In the present case of autonomous weapons, I again think the right reference class is that of things like the bioweapons convention and the space treaty. I think these, also, have been almost unreservedly good: they made the world more stable, avoided potentially catastrophic arms races, and left industries (like biotech, pharma, the space industry, the arms industry) perfectly healthy and arguably (especially for biotech) much better off than they would have been with a reputation mixed up in creating horrifying weapons. I also think in these cases, as with at least some AWs like antipersonnel WMDs, there is a pretty significant asymmetry, with the negative effects (of no regulation) having a tail into extremely bad outcomes, while the negative effects of well-structured regulations seem pretty mild at worst. Those are exactly the sorts of regulations/agreements I think we should be pushing on.
Very glad I wrote up the piece as I did, it’s been great to share and discuss it here with this community, which I have huge respect for!
It’s been a while since I looked into this, but if I remember correctly the space treaty is currently a major obstacle to industrializing space, since it makes it mostly impossible to have almost any form of private property over asteroids, the moon, or other planets, and creates a large amount of regulatory ambiguity and uncertainty for anyone wanting to work in this space.
When I talked to legal experts in the space for an investigation I ran for FHI 3-4 years ago, my sense was that the space treaty was a giant mess: nobody knew what it implied or meant, and it generally was so ambiguous and unclear, with so little binding power, that none of the legal scholars expected it to hold up in the long run.
I do think it’s likely that the space treaty overall did help demilitarize space, but given a lot of the complicated concerns around MAD, it’s not obvious to me that that was actually net-positive for the world. In any case, I really don’t see why one would list the space treaty as an obvious success.
I am not an expert on the Outer Space Treaty either, but, also going by anecdotal evidence, I have always heard it to be of considerable benefit and a remarkable achievement of diplomatic foresight during the Cold War. However, I would welcome any published criticisms of the Outer Space Treaty you wish to provide.
It’s important to note that the treaty was originally ratified in 1967 (as in, ~two years before landing on the Moon, ~5 years after the Cuban Missile Crisis). If you critique a policy for its effects long after its original passage (as with reference to space mining, or as others have with the effects of Section 230 of the CDA, passed in 1996), your critique is really about the government(s) failing to update and revise the policy, not about the enactment of the original policy. Likewise, it is important to consider the counterfactual in which the policy was never enacted. In this circumstance, I’m not sure how you envision a breakdown in US-USSR (and other world powers) negotiations on the demilitarization of space in 1967 would have led to better outcomes.
your critique is really about the government(s) failing to update and revise the policy, not about the enactment of the original policy.
This feels like a really weird statement to me. It is highly predictable that as soon as a law is in place, there is an incentive to use it for rent-seeking, and that abolishing a policy is reliably much harder than enacting a new one. When putting legislation in place, the effects of your actions that I will hold you responsible for of course include the highly predictable problems that will occur when your legislation has not been updated or revised in a long time. That’s one of the biggest limitations of legislation as a tool for solving problems!