This is a clever idea, using what's often considered a bug of regulation as a feature instead. I need to think on this; I'm not yet sure whether it's a good approach or not...
Simply put, AI Alignment has failed.
I do think this is an overstatement. There's no misaligned AGI yet that I'm aware of, so how has alignment failed? I agree with the thrust of what you were saying, though: it feels needlessly risky to bet everything on the technical alignment lever when the governance & strategy lever is available too.