My suggestion in the post that AGI labs should diversify their alignment approaches also assumed that labs exchange their matured alignment frameworks (or indeed make them public), so that each lab can apply multiple alignment theories/frameworks simultaneously while designing and training its AI. This way, each AI could be aligned with people to a higher degree than if only a single theory were applied.