Shane, religious fundamentalists routinely act based on their beliefs about God. Do you think that makes “God” a natural category that any superintelligence would ponder? I see “human thoughts about God” and “things that humans justify by referring to God” and “things you can get people to do by invoking God” as natural categories for any AI operating on modern Earth, though an unfriendly AI wouldn’t give it a second thought after wiping out humanity. But to go from here to reasoning about what God would actually be like is a needless and unnatural step.
If Bob believes that a locked safe, impenetrable to Bob, contains a valuable diamond, then Bob’s belief is a natural category when it comes to predicting and manipulating Bob; but the actual diamond is irrelevant, at least for predicting and manipulating Bob, so long as Bob can’t look directly at the diamond, and so long as we already know what Bob believes about the diamond.
In the same sense, an unfriendly AI has no reason to consider what really is right as a natural category, or to apply its own intelligence to the moral questions that humans are asking, any more than it has a motive to apply its own intelligence to the theological questions that humans used to ask. It has no interest, as humans do, in the idealized form of the answer; only in what humans believe and can be argued into.