This is a very carefully reasoned and detailed post, which lays out a clear framework for thinking about approaches to alignment, and I’m especially excited because it points to one quadrant—engineering-focused research without human models—as highly neglected. For these three reasons I’ve curated the post.