zeshen

Karma: 404

Feedback welcomed: www.admonymous.co/​zeshen

Non-loss-of-control AGI-related catastrophes are out of control too

Jun 12, 2023, 12:01 PM
2 points
3 comments · 24 min read · LW link

[Question] Is there a way to sort LW search results by date posted?

zeshen · Mar 12, 2023, 4:56 AM
5 points
1 comment · 1 min read · LW link

A newcomer’s guide to the technical AI safety field

zeshen · Nov 4, 2022, 2:29 PM
42 points
3 comments · 10 min read · LW link

Embedding safety in ML development

zeshen · Oct 31, 2022, 12:27 PM
24 points
1 comment · 18 min read · LW link

aisafety.community — A living document of AI safety communities

Oct 28, 2022, 5:50 PM
58 points
23 comments · 1 min read · LW link

My Thoughts on the ML Safety Course

zeshen · Sep 27, 2022, 1:15 PM
50 points
3 comments · 17 min read · LW link

Summary of ML Safety Course

zeshen · Sep 27, 2022, 1:05 PM
7 points
0 comments · 6 min read · LW link

Levels of goals and alignment

zeshen · Sep 16, 2022, 4:44 PM
27 points
4 comments · 6 min read · LW link

What if we approach AI safety like a technical engineering safety problem

zeshen · Aug 20, 2022, 10:29 AM
36 points
4 comments · 7 min read · LW link

I missed the crux of the alignment problem the whole time

zeshen · Aug 13, 2022, 10:11 AM
53 points
7 comments · 3 min read · LW link