It seems strange to create something so far beyond ourselves, and have its values ultimately be those of a child or a servant.
Would you say the same of a steam engine, or Stockfish, or Mathematica? All of those vastly exceed human performance in various ways!
I don’t see much reason to think that very, very capable AI systems are necessarily personlike or conscious, or have something-it-is-like-to-be-them—even if we imagine that they are designed and/or trained to behave in ways that are compatible with and promote human values and flourishing. Of course, if an AI system does have these things, I would also consider it a moral patient, but I’d prefer that our AI systems just aren’t moral patients until humanity has sorted out a lot more of our confusions.
I’d prefer that our AI systems just aren’t moral patients until humanity has sorted out a lot more of our confusions
I share this preference, but one of the confusions is whether our AI systems (and their impending successors) are moral patients. That is a fact about AI systems and moral patienthood, and it isn’t influenced by our hopes one way or the other.
If we know they aren’t conscious, then it is a non-issue. A random sample from the set of conscious beings would land on the SAI with probability 0. I’m concerned we will create something accidentally conscious.
I am skeptical it is easy to avoid. If it can simulate a conscious being, why isn’t that simulation conscious? If consciousness is a property of the physical universe, then an isomorphic process would have the same properties. And if it can’t simulate a conscious being, then it is not a superintelligence.
It can, however, possibly have a non-conscious outer program… and avoid simulating people. That seems like a reasonable proposal.