While Eliezer wrote posts explaining various AI alignment issues on Arbital, nobody linked to those explanations on LessWrong.
In case anybody reading this is curious about those AI alignment posts:
https://arbital.com/explore/ai_alignment/
(note: loads slowly)
Just to say, these are amazing. I would rate them above Superintelligence, or indeed almost any other resource, for increasing someone's concrete understanding of AI safety.