I would love to read a rationality textbook authored by a paperclip maximizer.
Me too. After all, a traditional paperclip maximizer would be quite rational—in fact much more rational than anyone known today—but its objectives (and therefore likely its textbook examples) would appear very unusual indeed!
If for no other reason than that it means they aren’t actually an agent that is maximizing paperclips. That’d be dangerous!
Almost any human existential risk is also a paperclip risk.