In section 5, I explain how CoEm is an agenda with relaxed constraints. It does not try to reduce the alignment tax to make the safety solution competitive for labs to use. Instead, it assumes there has been enough progress in international governance that you have full control over how your AI gets built, and that there are enforcement mechanisms ensuring no competitive but unsafe AI can be built elsewhere.
That's what the bifurcation of narratives is about: not letting labs implement only solutions with a low alignment tax, because that might simply not be enough.