Michael Soareverix

Karma: 93

Our Existing Solutions to AGI Alignment (semi-safe)

Michael Soareverix · Jul 21, 2022, 7:00 PM
12 points
1 comment · 3 min read · LW link

Musings on the Human Objective Function

Michael Soareverix · Jul 15, 2022, 7:13 AM
3 points
0 comments · 3 min read · LW link

Three Minimum Pivotal Acts Possible by Narrow AI

Michael Soareverix · Jul 12, 2022, 9:51 AM
0 points
4 comments · 2 min read · LW link

Could an AI Alignment Sandbox be useful?

Michael Soareverix · Jul 2, 2022, 5:06 AM
2 points
1 comment · 1 min read · LW link