Agree. Obviously alignment is important, but some of the strategies that involve always deferring to human preferences have always creeped me out in the back of my mind. It seems strange to create something so far beyond ourselves, and have its values be ultimately that of a child or a servant. What if a random consciousness sampled from our universe in the future comes from it with probability almost 1? We probably have to keep that in mind too. Sigh, yet another constraint we have to add!
It seems strange to create something so far beyond ourselves, and have its values be ultimately that of a child or a servant.
Would you say the same of a steam engine, or Stockfish, or Mathematica? All of those vastly exceed human performance in various ways!
I don’t see much reason to think that very very capable AI systems are necessarily personlike or conscious, or have something-it-is-like-to-be-them—even if we imagine that they are designed and/or trained to behave in ways compatible with and promoting of human values and flourishing. Of course if an AI system does have these things I would also consider it a moral patient, but I’d prefer that our AI systems just aren’t moral patients until humanity has sorted out a lot more of our confusions.
I’d prefer that our AI systems just aren’t moral patients until humanity has sorted out a lot more of our confusions
I share this preference, but one of the confusions is whether our AI systems (and their impending successors) are moral patients. That is a fact about AI systems and moral patienthood, and it isn't influenced by our hopes about whether it is true.
If we know they aren’t conscious, then it is a non-issue. A random sample from conscious beings would land on the SAI with probability 0. I’m concerned we create something accidentally conscious.
I am skeptical it is easy to avoid. If it can simulate a conscious being, why isn’t that simulation conscious? If consciousness is a property of the physical universe, then an isomorphic process would have the same properties. And if it can’t simulate a conscious being, then it is not a superintelligence.
It can, however, possibly have a non-conscious outer-program… and avoid simulating people. That seems like a reasonable proposal.
At which point maybe the moral thing is to not build this thing.
Sure, but that appears to be a non-option at this point in history.
It is an option up to the point that it’s actually built. It may be a difficult option for our society to take at this stage, but you can’t talk about morality and, in the same breath, treat a choice with obvious ethical implications as a mechanistic process we have no agency over. We didn’t need to exterminate the natives of the Americas upon first contact, or to colonize Africa. We did those things because they were the path of least resistance given the incentives in place at the time. But that doesn’t make them moral. Very few are the situations where the easy path is also the moral one. They were just the default absent a deliberate, significant, conscious effort not to do them, and the necessary sacrifices.
It’s also unclear, because the world as it stands is highly, highly immoral, and an imperfect solution could be a vast improvement.
The world is a lot better than it used to be in many ways. Risking throwing it away out of a misguided sense of urgency, because you can’t stand not seeing it become perfect within your lifetime, is selfishness, not commitment to moral duty.