I’m going to give more general feedback than Dorikka: there is a lot of existing material on AI safety in the Less Wrong sequences and in the literature produced by MIRI and FHI people, and any LW post about AI safety will have to engage with at least that material, if only implicitly, before it gets upvotes.