Dr_Manhattan comments on Stuart Russell: AI value alignment problem must be an “intrinsic part” of the field’s mainstream agenda
Dr_Manhattan
26 Nov 2014 21:30 UTC
1 point
Why does the compromise have to be a function of simplified values? I don’t think I implied that.