If we know they aren’t conscious, then it’s a non-issue: a random sample from the set of conscious beings would land on the SAI with probability 0. What concerns me is that we create something accidentally conscious.
I am skeptical that this is easy to avoid. If the SAI can simulate a conscious being, why isn’t that simulation itself conscious? If consciousness is a property of physical processes, then a process isomorphic to a conscious one should share its properties, consciousness included. And if the SAI can’t simulate a conscious being, then it isn’t a superintelligence.
It could, however, have a non-conscious outer program… and simply avoid simulating people. That seems like a reasonable proposal.