zeshen
Karma: 403
Feedback welcomed: www.admonymous.co/zeshen
Non-loss of control AGI-related catastrophes are out of control too
Yi-Yang, Mo Putera and zeshen · 12 Jun 2023 12:01 UTC · 2 points · 3 comments · 24 min read · LW link
[Question] Is there a way to sort LW search results by date posted?
zeshen · 12 Mar 2023 4:56 UTC · 5 points · 1 comment · 1 min read · LW link
A newcomer’s guide to the technical AI safety field
zeshen · 4 Nov 2022 14:29 UTC · 42 points · 3 comments · 10 min read · LW link
Embedding safety in ML development
zeshen · 31 Oct 2022 12:27 UTC · 24 points · 1 comment · 18 min read · LW link
aisafety.community—A living document of AI safety communities
zeshen and plex · 28 Oct 2022 17:50 UTC · 58 points · 23 comments · 1 min read · LW link
My Thoughts on the ML Safety Course
zeshen · 27 Sep 2022 13:15 UTC · 50 points · 3 comments · 17 min read · LW link
Summary of ML Safety Course
zeshen · 27 Sep 2022 13:05 UTC · 7 points · 0 comments · 6 min read · LW link
Levels of goals and alignment
zeshen · 16 Sep 2022 16:44 UTC · 27 points · 4 comments · 6 min read · LW link
What if we approach AI safety like a technical engineering safety problem
zeshen · 20 Aug 2022 10:29 UTC · 36 points · 4 comments · 7 min read · LW link
I missed the crux of the alignment problem the whole time
zeshen · 13 Aug 2022 10:11 UTC · 53 points · 7 comments · 3 min read · LW link