[Question] Very Unnatural Tasks?
In popular AGI takeoff scenarios, AGIs end up pursuing something “unnatural”: the complexity of the goal is completely orthogonal to their ability to fulfill it. I’m looking for a scenario in which an AGI’s objective function is “very unnatural”. Not maximizing paperclips, building companies, or performing content recommendation, but something distinctly un-human. Thank you!