In the case of a very competent generalist who isn’t demonstrably loyal, the very world itself (over which they are able to exert optimization power) will seem to ally itself with them. If you are a middlingly competent leader-by-luck, such an individual will of course be a threat to your position of power, unless your power/welfare is something the competent general optimizer is choosing to optimize for. So… it’s the alignment problem all over again. Human-to-human alignment costs us a lot of social overhead.