AIs taking over the world because they have implausibly human-like cognitive architectures and they hate us or resent us or desire higher status than us is an easy concept to get across. An AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn’t even have a term for human values is more difficult; because of anthropomorphic bias, it will be much less salient to people, even if it is more probable.
I actually come from that outside-LW viewpoint which finds the former scenario, involving “human-like cognitive architectures”, vastly more probable than “an AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn’t even have a term for human values”.
So it could be that your viewpoint is more likely, and the rest of us are suffering from “anthropomorphic bias”, but it also could be that anthropomorphic bias is in fact a self-fulfilling prophecy.
So it could be that your viewpoint is more likely, and the rest of us are suffering from “anthropomorphic bias”, but it also could be that anthropomorphic bias is in fact a self-fulfilling prophecy.
I don’t see how. We could get something like that if we get uploads before AGI, but that would really be more like an enhanced human taking over the world. Aside from that, where’s the self-fulfilling prophecy? If people expect AGIs to exhibit human-like emotions and primate status drives and go terribly wrong as a result, why does that increase the chance that the creators of the first powerful AGI will build human-like emotions and primate status drives into it?
Actual uploads are at the far end of a continuum of human-like cognitive architectures, and they carry the additional complication that the required scanning technology lags far behind electronics. You don’t need uploads for anthropomorphic AI; you just need to loosely reverse-engineer the brain.
Also, “human-like cognitive architectures” covers a wide spectrum that does not require human-like emotions or primate status drives; consider alexithymia, in which human cognition operates despite markedly reduced emotional awareness.
Understanding human languages is a practical prerequisite for any AI to reach high levels of intelligence, and the anthropomorphic cognitive capacities that true linguistic thinking implies heavily constrain the design space.
The self-fulfilling prophecy is that anthropomorphic AI will be both easier for us to create and more useful to us, so the bias is correct in a self-reinforcing manner.