I don’t understand your reply.
How would you know that there’s no reason for the terminal goals of a superintelligence “to get complicated” if humans, being “simple agents” in this context, are not intelligent enough to conceive of highly complex goals?