I find it hard to imagine MIRI getting regulated. Regulation typically steps in where an end user or consumer could be harmed, and for that you need to deploy products to those users/consumers. As far as I’m aware, that’s quite far from the kind of safety research MIRI does.
Sorry if I’m being dense, but I didn’t understand what you mean by the alignment problem for regulation. Do you mean aligning regulators so they regulate the important/potentially harmful parts? I don’t think regulators’ focus is completely random: even if they concentrate on more trivial issues, they’re still more likely to support safety teams (granted, the models those teams will be working on making safe won’t be as capable, but that’s the point).