williamsae

Karma: −20

The Overlooked Necessity of Complete Semantic Representation in AI Safety and Alignment

williamsae · Aug 15, 2024, 7:42 PM
−1 points
0 comments · 3 min read · LW link

Why the Solutions to AI Alignment Are Likely Outside the Overton Window

williamsae · Jun 6, 2023, 2:21 PM
−6 points
0 comments · 3 min read · LW link

Do You Really Want Effective Altruism?

williamsae · Jun 4, 2023, 8:06 AM
−7 points
3 comments · 7 min read · LW link

Are All Existential Risks Equivalent to a Lack of General Collective Intelligence? And Is GCI Therefore the Most Important Human Innovation in the History and Immediate Future of Mankind?

williamsae · Jun 19, 2020, 8:38 PM
0 points
0 comments · 5 min read · LW link