I would love to read a rationality textbook authored by a paperclip maximizer.
If for no other reason than that it would mean they aren’t actually an agent maximizing paperclips. That’d be dangerous!
Almost any human existential risk is also a paperclip risk.