Intelligent minds always come with built-in drives; there’s nothing that in general makes goals chosen by another intelligence worse than goals arrived at through any other process (e.g. natural selection, in the case of humans).
Slavery, one of the closest corresponding human institutions, has a very bad reputation, and for good reason: humans are typically not set up to do this sort of thing, so it tends to make them miserable. Even if you could get around that, there are serious moral issues with subjugating an existing intelligent entity that would prefer not to be subjugated. Neither of those problems inherently applies to newly designed entities. Misery is still very much worth avoiding, but that issue is largely orthogonal to how the entity’s goals are determined.
For a counterargument to your first claim, see “The Wisdom of Nature” by Bostrom and Sandberg (2009).