So it could be that your viewpoint is the more likely one and the rest of us are suffering from “anthropomorphic bias”, but it could also be that anthropomorphic bias is in fact a self-fulfilling prophecy.
I don’t see how. We could get something like that if we get uploads before AGI, but that would really be more like an enhanced human taking over the world. Aside from that, where’s the self-fulfilling prophecy? If people expect AGIs to exhibit human-like emotions and primate status drives and go terribly wrong as a result, why does that increase the chance that the creators of the first powerful AGI will build human-like emotions and primate status drives into it?
Actual uploads are a far endpoint along a continuum of human-like cognitive architectures, and they carry the additional complexity of scanning technology, which lags far behind electronics. You don’t need uploads for anthropomorphic AI; you just need to loosely reverse-engineer the brain.
Also, “human-like cognitive architectures” covers a wide spectrum that does not require human-like emotions or primate status drives; consider alexithymia, in which broadly human cognition coexists with a markedly reduced ability to identify and express emotions.
Understanding human languages is a practical prerequisite for any AI to reach high levels of intelligence, and the anthropomorphic cognitive capacities implied by true linguistic thinking heavily constrain the design space.
The self-fulfilling prophecy is that anthropomorphic AI will be both easier for us to create and more useful to us, so the bias turns out to be correct in a self-reinforcing manner.