
AlexMennen

Karma: 4,460

What is calibration?

AlexMennen · Mar 13, 2023, 6:30 AM
27 points
1 comment · 4 min read · LW link

Searching for a model’s concepts by their shape – a theoretical framework

Feb 23, 2023, 8:14 PM
51 points
0 comments · 19 min read · LW link

Event [Berkeley]: Alignment Collaborator Speed-Meeting

Dec 19, 2022, 2:24 AM
18 points
2 comments · 1 min read · LW link

Why bet Kelly?

AlexMennen · Nov 15, 2022, 6:12 PM
32 points
14 comments · 5 min read · LW link

Average probabilities, not log odds

AlexMennen · Nov 12, 2021, 9:39 PM
27 points
20 comments · 5 min read · LW link

Mapping Out Alignment

Aug 15, 2020, 1:02 AM
43 points
0 comments · 5 min read · LW link

AlexMennen’s Shortform

AlexMennen · Dec 8, 2019, 4:51 AM
7 points
1 comment · LW link

When wishful thinking works

AlexMennen · Sep 1, 2018, 11:43 PM
41 points
1 comment · 3 min read · LW link

Safely and usefully spectating on AIs optimizing over toy worlds

AlexMennen · Jul 31, 2018, 6:30 PM
24 points
16 comments · 2 min read · LW link

Computational efficiency reasons not to model VNM-rational preference relations with utility functions

AlexMennen · Jul 25, 2018, 2:11 AM
16 points
5 comments · 3 min read · LW link

A comment on the IDA-AlphaGoZero metaphor; capabilities versus alignment

AlexMennen · Jul 11, 2018, 1:03 AM
40 points
1 comment · 1 min read · LW link

Logical uncertainty and mathematical uncertainty

AlexMennen · Jul 1, 2018, 12:33 AM
0 points
0 comments · 1 min read · LW link
(www.lesswrong.com)

Logical uncertainty and Mathematical uncertainty

AlexMennen · Jun 26, 2018, 1:08 AM
35 points
6 comments · 4 min read · LW link

More on the Linear Utility Hypothesis and the Leverage Prior

AlexMennen · Feb 26, 2018, 11:53 PM
16 points
4 comments · 9 min read · LW link

Value learning subproblem: learning goals of simple agents

AlexMennen · Dec 18, 2017, 2:05 AM
0 points
0 comments · 2 min read · LW link

Against the Linear Utility Hypothesis and the Leverage Penalty

AlexMennen · Dec 14, 2017, 6:38 PM
41 points
47 comments · 11 min read · LW link

Being legible to other agents by committing to using weaker reasoning systems

AlexMennen · Dec 3, 2017, 7:49 AM
4 points
1 comment · 3 min read · LW link

Metamathematics and probability

AlexMennen · Sep 22, 2017, 4:04 AM
1 point
0 comments · 1 min read · LW link
(alexmennen.com)

Metamathematics and Probability

AlexMennen · Sep 22, 2017, 3:07 AM
1 point
0 comments · 1 min read · LW link
(alexmennen.com)

Density Zero Exploration

AlexMennen · Aug 17, 2017, 12:43 AM
4 points
0 comments · 2 min read · LW link