agilecaveman

Karma: 80

Research agenda for AI safety and a better civilization

agilecaveman, Jul 22, 2020, 6:35 AM
12 points
2 comments, 16 min read, LW link

Post-Rationality and Rationality, A Dialogue

agilecaveman, Nov 13, 2018, 5:55 AM
2 points
2 comments, 10 min read, LW link

Aumann’s Agreement Revisited

agilecaveman, Aug 27, 2018, 6:21 AM
6 points
1 comment, 7 min read, LW link

Problems integrating decision theory and inverse reinforcement learning

agilecaveman, May 8, 2018, 5:11 AM
7 points
2 comments, 3 min read, LW link

Double Cruxing the AI Foom debate

agilecaveman, Apr 27, 2018, 6:46 AM
17 points
3 comments, 11 min read, LW link

Fermi Paradox: Reference Class arguments and Other Possibilities

agilecaveman, Apr 22, 2018, 12:56 AM
1 point
0 comments, 1 min read, LW link
(steemit.com)

Meetup: Seattle Secular Solstice

agilecaveman, Nov 18, 2016, 10:00 PM
1 point
0 comments, 1 min read, LW link

Exploring Notions of “Utility Approximation” and testing quantilizers

agilecaveman, Jun 17, 2016, 6:17 AM
1 point
0 comments, 1 min read, LW link
(www.overleaf.com)

Meetup: Discussion of AI as a positive and negative factor in global risk

agilecaveman, Jan 10, 2016, 4:31 AM
2 points
0 comments, 1 min read, LW link

Meetup: MIRIx: Sleeping Beauty discussion

agilecaveman, Dec 23, 2015, 4:29 AM
2 points
0 comments, 1 min read, LW link

Meetup: Donation Decision Day

agilecaveman, Dec 23, 2015, 4:03 AM
2 points
0 comments, 1 min read, LW link

Meetup: Seattle Solstice

agilecaveman, Nov 9, 2015, 10:17 PM
2 points
0 comments, 1 min read, LW link

Attempting to refine “maximization” with 3 new -izers

agilecaveman, Aug 11, 2015, 6:07 AM
2 points
1 comment, 1 min read, LW link
(www.overleaf.com)