I like that you're writing about this: the policy problem is not mentioned often enough on this forum. I agree that it needs to be part of AGI safety research.
I have no deep insights to add, just a few high level remarks:
to pass laws internationally that make it illegal to operate or supervise an AGI that is not properly equipped with the relevant control mechanisms. I think this proposal is necessary but insufficient. The biggest problem with it is that it is totally unenforceable.
I feel that the ‘totally unenforceable’ meme is very dangerous: it is too often used as an excuse by people who are looking for reasons to stay out of the policy game. I also feel that your comments further down in the post in fact contradict this ‘totally unenforceable’ claim.
Presuming that AGI is ultimately instantiated as a fancy program written in machine code, actually ensuring that no individual is running ‘unregulated’ code on their machine would require oversight measures draconian enough to render them logistically and politically inconceivable, especially in Western democracies.
You mean, exactly like how the oversight measures against making unregulated copies of particular strings of bits, in order to protect the business model of the music industry and Hollywood, were politically inconceivable in the period from the 1980s until now, especially in Western democracies? We can argue about how effective this oversight has been, but many things are politically conceivable.
My last high-level remark is that there is a lot of AI policy research, and some of it is also applicable to AGI and x-risk. However, it is very rare to see AI policy researchers post on this forum.
Thanks for your comment! I agree with both of your hesitations and I think I will make the relevant changes to the post: instead of ‘totally unenforceable,’ I’ll say ‘seems quite challenging to enforce.’ I believe that this is true (and I hope that the broad takeaway from this post is basically the opposite of ‘researchers need to stay out of the policy game,’ so I’m not too concerned that I’d be incentivizing the wrong behavior).
To your point, ‘logistically and politically inconceivable’ is probably similarly overblown. I will change it to ‘highly logistically and politically fraught.’ You're right that the general failure of these policies shouldn't be equated with their inconceivability. (I am fairly confident that, if we were so inclined, we could go download a free copy of any movie or song we could dream of; I wouldn't consider this a case study of policy success, only of policy conceivability!)