I can’t think of much that the government could do to seriously reduce the chance of UFAI, although taxing or banning GPUs (or just all powerful computers) could help. (Even then, it wouldn’t be easy for one government to put much of a dent in Moore’s law.)
I don’t think that the government is capable of distinguishing people that are doing actual safety work from people who say the word “safety” a lot.
If I write some code, how is the government even going to tell whether it’s AI? Does this mean that any piece of code anywhere has to be sent to some government safety checkers before it can be run? Either that check needs to be automated, or it requires a vast amount of skilled labour, or you are making programming almost illegal.
If the ban on writing arbitrary code without safety approval isn’t actively enforced, especially if getting approval is slow or expensive, then many researchers will run code now, with the intention of getting approval later if they want to publish (test your program until you’ve removed all the bugs, then send that version to the safety checkers).
There is no way that legislators can draw a line between ordinary code and experimental AI designs that would be more than moderately inconvenient to circumvent.
The government can throw lots of money at anyone who talks about AI safety, and if they are lucky, as much as 10% of that money might go to people doing actual research.
They could legislate that all CS courses must have a class on AI safety, and maybe some of those classes would be any good.
I think that if Nick Bostrom was POTUS, he could pass some fairly useful AI safety laws, but nothing game-changing. I think that to be useful at all, the rules either need to be high-fidelity (carrying a large amount of information directly from the AI safety community) or too drastic for anyone to support them. “Destroy all computers” is a short and easily memorable slogan, but way outside the Overton window. Any non-drastic proposal that would make a significant difference can’t be described in a slogan.