Alignment & Agency (Raemon, Apr 9, 2022)

- An Orthodox Case Against Utility Functions, by abramdemski (Apr 7, 2020). 154 points, 66 comments, 8 min read, 2 reviews.
- The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables, by johnswentworth (Nov 18, 2020). 128 points, 50 comments, 11 min read, 2 reviews.
- Alignment By Default, by johnswentworth (Aug 12, 2020). 174 points, 96 comments, 11 min read, 2 reviews.
- An overview of 11 proposals for building safe advanced AI, by evhub (May 29, 2020). 220 points, 37 comments, 38 min read, 2 reviews.
- The ground of optimization, by Alex Flint (Jun 20, 2020). 248 points, 80 comments, 27 min read, 1 review.
- Search versus design, by Alex Flint (Aug 16, 2020). 109 points, 40 comments, 36 min read, 1 review.
- Inner Alignment: Explain like I'm 12 Edition, by Rafael Harth (Aug 1, 2020). 184 points, 47 comments, 13 min read, 2 reviews.
- Inaccessible information, by paulfchristiano (Jun 3, 2020). 83 points, 17 comments, 14 min read, 2 reviews. (ai-alignment.com)
- AGI safety from first principles: Introduction, by Richard_Ngo (Sep 28, 2020). 128 points, 18 comments, 2 min read, 1 review.
- Is Success the Enemy of Freedom? (Full), by alkjash (Oct 26, 2020). 302 points, 69 comments, 9 min read, 1 review. (radimentary.wordpress.com)