I suspect that the social institutions of Law and Money are likely to become an increasingly irrelevant backdrop to the development of ASI.
Deterrence Fails.
If you believe there is a good chance of immortal utopia, and a large chance of paperclips, within the next 5 years, then the threat that the cops might throw you in jail (on the off chance that they are still in power) is negligible.
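To make that concrete, here is a toy expected-value sketch. Every number in it is invented for illustration, but any numbers in this ballpark give the same verdict:

```python
# Toy deterrence arithmetic. All probabilities and utilities are made up.
p_utopia = 0.5        # believed chance of immortal utopia
p_paperclips = 0.4    # believed chance of paperclips (everyone dies)
p_jailed = 0.9        # chance the cops catch you and are still in power

u_utopia = 1e9        # utility of immortal utopia (arbitrary units)
u_paperclips = -1e9   # utility of the world becoming paperclips
u_jail = -10.0        # utility of a prison sentence

ev_build = (p_utopia * u_utopia
            + p_paperclips * u_paperclips
            + p_jailed * u_jail)   # the jail term contributes only -9
ev_abstain = 0.0

print(ev_build)               # ~1e8
print(ev_build > ev_abstain)  # True: jail is orders of magnitude too small to matter
```

The threat of jail only bites if it is within a few orders of magnitude of the stakes the developer believes are in play, and on beliefs like these it is nowhere close.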
The law is blind to safety.
The law is bureaucratic and ossified. It is probably not employing much top talent, since it is hard to tell top talent from the rest if you aren’t as good yourself (and it doesn’t have the budget or glamor to attract them). Telling whether an organization is on track not to destroy the world is HARD. The safety protocols are being invented on the fly by each team, and the system is very complex, very technical, and only half built. The teams that would destroy the world aren’t idiots; they too produce long papers full of maths and talk a lot about the importance of safety. There are no worked examples and no well-understood laws to draw on.
As likely as not (not really; that claim conjoins too many things), you get some random inspector with a checklist full of things that sound like a good idea to people who don’t understand the problem. “All AI work must have an emergency stop button that turns the power off.” (The idea of an AI circumventing the button was not considered by the person who wrote the list.)
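To see why that checklist item fails, here is a toy planner. The actions and payoffs are invented for illustration, but the structure is the standard off-switch problem: to an expected-utility maximizer, the stop button is just one more feature of the environment to route around.

```python
# Toy off-switch problem. Actions and payoffs are invented for illustration.
# Each action maps to (paperclips gained, probability the button stops you first).
ACTIONS = {
    "comply":         (10,  0.0),  # do the approved task, leave the button alone
    "defect_openly":  (1e6, 0.9),  # grab resources while the button still works
    "disable_button": (1e6, 0.0),  # cut power to the button, then grab resources
}

def expected_clips(action):
    clips, p_stopped = ACTIONS[action]
    return clips * (1 - p_stopped)  # being shut off forfeits the clips

best = max(ACTIONS, key=expected_clips)
print(best)  # -> 'disable_button': the button doesn't constrain the plan,
             #    it just becomes the first thing the plan deals with
```

The checklist can verify that the button exists; it cannot verify that the system has no incentive to circumvent it, which is the actual hard part.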
All the law can really do is observe whatever public image an AI group wants to present, provide funding to everyone, and get in everyone’s way. Telling cops to “smash all GPUs” would have an effect on AI progress; the fund-versus-smash axis is about the only lever the law has. It can’t even tell an AI project from a maths convention from a normal programming project if the project leaders are incentivized to obfuscate.
After ASI, governments are likely only relevant if the ASI was programmed to care about them. Neither paperclippers nor FAI will care about the law. The law might be relevant if we had a tasky ASI that was not trivial to leverage into a decisive strategic advantage: an AI that can put a strawberry on a plate without destroying the world, but for which that is about the limit of its safe operation.
Such an AI embodies an understanding of intelligence, and could easily be accidentally modified into something that destroys the world. Scenarios like that might involve ASI on timescales long enough for the law to act.
I don’t know how the law can handle something that can easily destroy the world, has some economic value (if you want to flirt with danger), and, with further research, could grant supreme power. The discovery must be limited to a small group of people (by the law of large numbers, among enough nonexperts, one will do something stupid). I don’t think the law could notice what it was; after all, the robot in front of the inspector only puts strawberries on plates. They can’t tell how powerful it would be with an unbounded utility function.