Lately I’ve been appreciating, more and more, something I’m starting to call “Meta-Alignment”: everything that touches AI has to itself be aligned well enough that it doesn’t undermine, or “misalign,” the alignment project. For example, we need to be careful about the discourse surrounding alignment, because we might give the wrong idea to people who will vote on policy or go on to work in AI or AI-adjacent fields themselves. Likewise, policy needs to be carefully aligned so it doesn’t create misaligned incentives that damage the alignment project, and the same goes for internal policies at companies that work with AI. This is probably a statement of the obvious, but the more I think about it, the more daunting a prospect it becomes.