Lauro Langosco

Karma: 542

https://www.laurolangosco.com/

Long-Term Future Fund Ask Us Anything (September 2023)

Aug 31, 2023, 12:28 AM
33 points
6 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Lauro Langosco's Shortform

Lauro Langosco · Jun 16, 2023, 10:17 PM
4 points
4 comments · 1 min read · LW link

An Exercise to Build Intuitions on AGI Risk

Lauro Langosco · Jun 7, 2023, 6:35 PM
52 points
3 comments · 8 min read · LW link

Uncertainty about the future does not imply that AGI will go well

Lauro Langosco · Jun 1, 2023, 5:38 PM
62 points
11 comments · 7 min read · LW link

Research Direction: Be the AGI you want to see in the world

Feb 5, 2023, 7:15 AM
43 points
0 comments · 7 min read · LW link

Some reasons why a predictor wants to be a consequentialist

Lauro Langosco · Apr 15, 2022, 3:02 PM
23 points
16 comments · 5 min read · LW link

Alignment researchers, how useful is extra compute for you?

Lauro Langosco · Feb 19, 2022, 3:35 PM
8 points
4 comments · 1 min read · LW link

[Question] What alignment-related concepts should be better known in the broader ML community?

Lauro Langosco · Dec 9, 2021, 8:44 PM
6 points
4 comments · 1 min read · LW link

Discussion: Objective Robustness and Inner Alignment Terminology

Jun 23, 2021, 11:25 PM
73 points
7 comments · 9 min read · LW link

Empirical Observations of Objective Robustness Failures

Jun 23, 2021, 11:23 PM
63 points
5 comments · 9 min read · LW link