But why? That would be strictly more dangerous—way, way more dangerous—than a superintelligence that isn’t a “proper mind” in this sense!
(...)
(Because it would be a terrible idea. Obviously.)
Why? Do you think humans are doing such a great job? I sure don’t. I’m interested in creating something saner than humans, because humans mostly are not. Obviously. :)
Yes, I guess the central questions I’m trying to pose here are these: Do the humans who control the AI even have a sufficient understanding of good and bad? Can any human group be trusted with the power of a superintelligence long-term? Or, if you say that only the initial goal specification matters, can anyone be trusted to specify such goals without royally messing them up, intentionally or unintentionally?
Given the state of the world, and given the flaws of humans, I certainly don’t think so. Therefore, the goal should be the creation of something less messed up to take over. That doesn’t require alignment to some common human value system (whatever that would even be! It’s not like humans actually share a common value system, at least not one with each other’s best interests at heart).