Jan Czechowski (Karma: 107)
[Question] Correcting human error vs doing exactly what you’re told—is there literature on this in context of general system design?
Jan Czechowski · 29 Jun 2022 21:30 UTC · 6 points · 0 comments · 1 min read · LW link
Steganography and the CycleGAN—alignment failure case study
Jan Czechowski · 11 Jun 2022 9:41 UTC · 34 points · 0 comments · 4 min read · LW link
Free course review — Reliable and Interpretable Artificial Intelligence (ETH Zurich)
Jan Czechowski · 10 Aug 2021 16:36 UTC · 7 points · 0 comments · 3 min read · LW link
Jan Czechowski’s Shortform
Jan Czechowski · 19 Jul 2021 21:55 UTC · 2 points · 11 comments · 1 min read · LW link