zeshen
Karma: 404
Feedback welcomed: www.admonymous.co/zeshen
Non-loss of control AGI-related catastrophes are out of control too
Yi-Yang, Mo Putera and zeshen · Jun 12, 2023, 12:01 PM · 2 points · 3 comments · 24 min read · LW link

[Question] Is there a way to sort LW search results by date posted?
zeshen · Mar 12, 2023, 4:56 AM · 5 points · 1 comment · 1 min read · LW link

A newcomer’s guide to the technical AI safety field
zeshen · Nov 4, 2022, 2:29 PM · 42 points · 3 comments · 10 min read · LW link

Embedding safety in ML development
zeshen · Oct 31, 2022, 12:27 PM · 24 points · 1 comment · 18 min read · LW link

aisafety.community—A living document of AI safety communities
zeshen and plex · Oct 28, 2022, 5:50 PM · 58 points · 23 comments · 1 min read · LW link

My Thoughts on the ML Safety Course
zeshen · Sep 27, 2022, 1:15 PM · 50 points · 3 comments · 17 min read · LW link

Summary of ML Safety Course
zeshen · Sep 27, 2022, 1:05 PM · 7 points · 0 comments · 6 min read · LW link

Levels of goals and alignment
zeshen · Sep 16, 2022, 4:44 PM · 27 points · 4 comments · 6 min read · LW link

What if we approach AI safety like a technical engineering safety problem
zeshen · Aug 20, 2022, 10:29 AM · 36 points · 4 comments · 7 min read · LW link

I missed the crux of the alignment problem the whole time
zeshen · Aug 13, 2022, 10:11 AM · 53 points · 7 comments · 3 min read · LW link