williamsae

Karma: −20

The Overlooked Necessity of Complete Semantic Representation in AI Safety and Alignment

williamsae · 15 Aug 2024 19:42 UTC
−1 points
0 comments · 3 min read · LW link

Why the Solutions to AI Alignment are Likely Outside the Overton Window

williamsae · 6 Jun 2023 14:21 UTC
−6 points
0 comments · 3 min read · LW link

Do You Really Want Effective Altruism?

williamsae · 4 Jun 2023 8:06 UTC
−7 points
3 comments · 7 min read · LW link

Are All Existential Risks Equivalent to a Lack of General Collective Intelligence? And is GCI therefore the Most Important Human Innovation in the History and Immediate Future of Mankind?

williamsae · 19 Jun 2020 20:38 UTC
0 points
0 comments · 5 min read · LW link