Well, a paperclip maximizer has an identifiable goal. What is the identifiable goal of humans?
Well, “finding new algorithms,” a.k.a. learning, may itself be a kind of algorithm, but certainly one of a higher level than a simple algorithm like an instinct or reflex. I think there is a qualitative difference between an entity that cannot learn and an entity that can.