driplikesake
Karma: 1
driplikesake · 3 Dec 2017 4:56 UTC · 2 points
on: Message to any future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” [AI alignment prize entry]
Counter to point 4.5.1: Couldn’t a Rogue AI (RAI) simulate an FAI to create indexical uncertainty as well?