Promoted to curated: I like this post as a relatively self-contained explanation of why AI Alignment is hard. It's not perfect: it makes a number of inferences implicitly, without calling sufficient attention to them. But overall this still seems to me like one of the best things to link to when someone asks why AI Alignment is an open problem.