I think it’s obviously a bad idea to deploy AGI that has an unacceptably high chance of causing irreparable harm. I think the questions of “what chance is unacceptably high?” and “what is the chance of causing irreparable harm for this proposed AGI?” are both complicated technical questions that I am not optimistic will be answered well by policy-makers or government bodies. I currently expect it’ll take serious effort to have answers at all when we need them, let alone answers that could persuade Congress.
This makes me especially worried about attempts to shift policy that aren’t in touch with the growing science of AI Alignment, but then there’s something of a double bind: if the policy efforts stay close to the safety research efforts, you can give policymakers the best available advice, but you pay the price of backlash from AI researchers if they think regulation-by-policy is a mistake. If the two stay distant, the safety researchers can say their hands are clean, but the regulation is even more likely to be a mistake.
Thanks for your comments, these are interesting points. I agree that these are hard questions and that it’s not clear that policymakers will be good at answering them. However, I don’t think AI researchers themselves are any better, which you seem to imply. I’ve worked as an engineer myself, and I’ve seen that when engineers or scientists are close to their own topic, their judgment of its risks and downsides becomes less reliable, not more. AGI safety researchers will be convinced of AGI risk, but I’m afraid their judgment of their own remedies will also not be the best judgment available. You’re right that these risk estimates may be technical and that politicians will not have the opportunity to look into the details. What I have in mind is more of a governmental body. In The Netherlands, for example, we have an environmental planning agency that helps politicians with technical climate questions. Something like that for AGI, staffed by knowledgeable people who are not themselves tied to AI research, is as close as you can come to a good risk estimate, I think.
You might also say that any X-risk above a certain threshold, say 1%, is too high. Then perhaps it doesn’t even matter whether it’s 10% or 15%, although I still think it’s important that impartial experts in service of the public find out.