Caledonian—I’d say that one of the key concepts in my current understanding of the Singularity is that it’s the polar opposite of a hard-wired goal. Surely the very idea is that we don’t know what happens inside/beyond a singularity, hence the name?
The whole point of attempting a “Friendly AI” is that its proponents believe that it IS possible to exclude entire branches of possibility from an AI’s courses of action—that the superhuman intelligence can be made safe. Not merely friendly in a human sense, but favorable to human interests, not ‘evil’.
Of course, they cannot provide an objective and rigorous description of what “being in human interests” actually entails, nor can they explain clearly what ‘evil’ is. But they know it when they see it, apparently. And since many of them seem to believe that ‘values’ are arbitrary, they’ve never bothered subjecting what they value to analysis.
Perhaps the possibility that a consequence of an entity being utterly good might be its being utterly unsafe has never occurred to them. And perhaps the possibility that superhuman general intelligence might analyze their values and find them lacking never occurred to them either. That would explain a lot.
Why would being good make you unsafe?
Caledonian hasn’t posted anything since 2009, in case you said that in hopes of his responding.