Alignment & Agency
Raemon, 9 Apr 2022 21:57 UTC

- An Orthodox Case Against Utility Functions — abramdemski, 7 Apr 2020 19:18 UTC (155 points, 65 comments, 8 min read, 2 reviews)
- The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables — johnswentworth, 18 Nov 2020 17:47 UTC (128 points, 49 comments, 11 min read, 2 reviews)
- Alignment By Default — johnswentworth, 12 Aug 2020 18:54 UTC (174 points, 96 comments, 11 min read, 2 reviews)
- An overview of 11 proposals for building safe advanced AI — evhub, 29 May 2020 20:38 UTC (213 points, 36 comments, 38 min read, 2 reviews)
- The ground of optimization — Alex Flint, 20 Jun 2020 0:38 UTC (247 points, 80 comments, 27 min read, 1 review)
- Search versus design — Alex Flint, 16 Aug 2020 16:53 UTC (108 points, 40 comments, 36 min read, 1 review)
- Inner Alignment: Explain like I'm 12 Edition — Rafael Harth, 1 Aug 2020 15:24 UTC (181 points, 47 comments, 13 min read, 2 reviews)
- Inaccessible information — paulfchristiano, 3 Jun 2020 5:10 UTC (83 points, 17 comments, 14 min read, 2 reviews) (ai-alignment.com)
- AGI safety from first principles: Introduction — Richard_Ngo, 28 Sep 2020 19:53 UTC (128 points, 18 comments, 2 min read, 1 review)
- Is Success the Enemy of Freedom? (Full) — alkjash, 26 Oct 2020 20:25 UTC (294 points, 69 comments, 9 min read, 1 review) (radimentary.wordpress.com)