Can you explain more about how an FDA for AI could lead to s-risk? “Risk of suffering on an astronomical scale.” I’m skeptical. The FDA may be evil, but it’s not that evil… ;)
I would also appreciate an elaboration by Aiyen on the suffering risk point.
I take Aiyen’s concern very, very seriously. I think the most immediate risk is that the AI Regulatory Bureau (AIRB) would regulate real AI safety research, so MIRI wouldn’t be able to get anything done. Even if you wrote the law to say “this doesn’t apply to AI alignment research,” the courts could interpret that exemption narrowly enough that the moment you turn on an actual computer you are now a regulated entity per AIRB Ruling 3A.
In this world, we thought we were making it harder for DeepMind to conduct AI research. But they have plenty of money to throw at compliance, so it barely slows them down. What we actually do is make it illegal for MIRI to operate.
I realize the irony in this. There is an alignment problem for regulation which, while not as difficult as the one for AI, is also quite hard.
I find it really hard to imagine MIRI getting regulated. Regulation more commonly steps in where an end user or consumer could be harmed, and for that you need to deploy products to those users/consumers. As far as I’m aware, that is quite far from the kind of safety research MIRI does.
Sorry, I must be really dumb, but I didn’t understand what you mean by the alignment problem for regulation. Aligning regulators so they regulate the important/potentially harmful bits? I don’t think this is completely random: even if regulators focus more on trivial issues, they’re more likely to support safety teams (although, sure, the models those teams will be working on making safe won’t be as capable; that’s the point).