In what circumstances do we need international coordination, and what would it need to accomplish? I put substantial probability on a world that goes straight from a few researchers, ignored by the world, to super-intelligent AI with nanotech. In what world model do we have serious discussions about ASI in the UN, and in what worlds do we need them?
This sort of thing is really difficult to regulate, for several reasons:
1) What are we banning? Potentially dangerous computer programs. But which program designs are dangerous? Experts disagree on what kinds of algorithms might lead to AGI. If someone hands you an arbitrary program, how do you tell whether it's allowed? (Assuming you don't want to ban all programming.) If we can't say what we are banning, how can we ban it?
2) We can't limit access. Enriched uranium is rare, hard to make, and easy to track, which makes it easy for governments to stop people from building nukes. Some recreational drugs, by contrast, can be produced with seeds and a room full of plants; the ingredients are accessible to most people. Drugs are detectable in minuscule quantities, and some of the equipment needed to produce them is hard to hide, yet law enforcement has largely failed to stop them. If we knew that AGI required a GPU farm, limiting access might be somewhat doable. If serious processing power and memory are not required, you are trying to ban something that can be encrypted, sent anywhere in the world in moments, and copied indefinitely. Look at how successful law enforcement agencies have been at stopping people from making malware, or at censoring anything.
3) Unlike in nearly every other circumstance in law enforcement, incentives don't work. Once someone actually thinks they have a shot at AGI, making it illegal won't stop them. If they succeed in making a safe AGI, programmed to do whatever they want, your threats are meaningless. If they make a paperclipper, the threat of jail is even more meaningless.
4) Unlike with most crimes, the people likely to make AGI are exceptionally smart, and therefore less likely to be caught.
5) You have to succeed every time; they only have to succeed once (rough numbers below).
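To put rough numbers on point 5 (an illustrative sketch with made-up figures, not a claim about real enforcement rates): if there are n independent attempts per year, each with probability p of slipping past enforcement, then

P(at least one attempt succeeds) = 1 − (1 − p)^n

Even 99%-effective enforcement (p = 0.01) against 100 attempts a year gives 1 − 0.99^100 ≈ 63% per year, and the cumulative odds approach certainty within a decade.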
Arresting everyone with "deep learning" on their CV would slow progress. Banning the manufacture of all computer chips would have worked in 1950, but if an old smartphone is enough, don't expect even an authoritarian world government to round up every last one. So stopping new chips from being made and smashing GPUs would slow the work, but not stop it. Legislation can slow the arrival of AGI, but it can't be confident of stopping it without a LOT of collateral damage.
The treaty can constrain the largest projects, like DeepMind, and identifying them could be done by an international court. We don't need to defend against the smartest humans, only against the crackpots who still think we're in an arms race rather than a bomb-defusal operation. Imagine that the Manhattan Project mathematicians had computed that a nuke had a 90% chance of igniting the atmosphere. Would their generals still have been thumping the table, insisting they develop nukes before the Nazis did? I think the more pressing concern is to make every potential researcher, including the Nazis, aware of that 90% chance. Legislation that helps slow things down is all your top-level comment requires.
In your world, a treaty might make everyone keep the researchers in check until alignment is solved.