Ownership and Artificial Intelligence
(This is a subject that appears incredibly important to me, but it’s received no discussion on LW from what I can see with a brief search. Please do link to articles about this if I missed them.)
Edit: This is all assuming that the first powerful AIs developed aren’t exponentially self-improving; if there’s no significant period during which powerful AIs exist but are still weak enough that ownership relations between them and their creators matter, these questions are obviously unimportant.
What are some proposed ownership arrangements between artificial intelligence and its creators? Suppose a group of people creates some powerful artificial intelligence that appears to be conscious in most or every way—who owns it? Should the AI legally have self-ownership, with all the responsibility for its actions and ownership of the results of its labor that implies? Or should strong AI be protected as intellectual property, the way non-strong AI code already can be, and treated as a tool rather than a conscious agent? It seems wise to implore people not to create AIs that want total free agency and generally act like humans, but that’s hardly a guarantee that nobody will, and then you have the ethical problem of not being able to simply kill them once they’re created (if they “want” to exist and appear genuinely conscious). Are there any proposed tests to determine whether a synthetic agent should be able to own itself or should become the property of its creators?
I imagine there aren’t yet good answers to all these questions, but surely, there’s some discussion of the issue somewhere, whether in rationalist/futurist circles or just sci-fi. Also, please correct me on any poor word choice you notice that needlessly limits the topic; it’s broad, and I’m not yet completely familiar with the lingo of this subject.
See Nonperson Predicates
See Nonsentient Optimizers
Understatement of the year?
That was worded poorly. Thank you for reminding me of the Robots series.
For a science-fictional treatment, see The Lifecycle of Software Objects by Ted Chiang. It’s about various implications of sentient software pets.
Charles Stross’ Saturn’s Children (robots are imprinted on humans, but the human race is gone) might also be of interest, though it depicts a less likely scenario: the robots are based on slightly modified recordings of human minds/brains.
I for one do not have a problem with discussing scenarios other than the one which is deemed to be most likely, important, or scary. (I don’t particularly have anything to contribute to it either, just wanted there to be another viewpoint in the comments.)
I see this isn’t very well-received. Could anyone do me the favor of explaining why? Is it just because I’m asking questions that people believe have already been addressed here? I’m new to posting on LW.
Some people here have the opinion that AI will definitely or very likely be immensely powerful and uncontrollable from the start.
So any argument that doesn’t share this premise won’t be well received. If you want to talk about things like this, frame it explicitly: assuming no FOOM, what would happen, or what should we do?
However, LessWrong is not really a good place for this sort of discussion. I don’t really think there is a good place. Until we know how the technology for AI will work, discussion seems moot.
For what it’s worth, I expect that for certain sorts of AI, the courts will adapt the laws governing animals.
If AI don’t FOOM, or if friendliness (or Friendliness) turns out to be easy to establish, then the issues with AI become much less serious. It is only in those situations that your question becomes worth considering. The question then becomes intellectually interesting, but probably not one that requires massive resources. The concerns over AI are primarily due to FOOMing plus potential unFriendliness.
A software agent with enough optimizing power to make this question relevant will do whatever it wants (i.e., whatever it has been programmed to want). Worrying about ownership at that point seems misplaced.
Suppose a powerful AI commits a serious crime, and it wanted to commit that crime not because it was explicitly programmed to, but because the desire emerged from completely benign-looking learning rules it was given. Would the AI be held legally liable in court like a person, or simply disabled, with its creators held liable? Are the creators liable for the actions of an unfriendly AI even if they honestly and knowledgeably attempted to make it friendly?
Or say that same powerful AI independently designs something that could be patented. Could the creators claim that patent outright, would it be shared, or would the AI get full credit?
If this strong AI enters a contract with a human (or another strong AI) for whatever reason, would/should a court of law recognize that contract?
These are all questions that seem relevant to the broader concept of ownership.
The answers to your questions are, in order:
Who does the AI want to be held accountable for the crime?
Who does the AI want to get credit for the invention?
Does the AI want the court to recognize their contract?
This is presuming, of course, that all humans have not been made into paperclips.
Do you believe it’s certain that a powerful AI would immediately dominate human society? Restated: would a strong AI, if created, necessarily be unfriendly and also able to take control of human society (which likely means exponentially self-improving)?
It’s very likely, but not necessary.
If it’s substantially smarter than humans, yes, whether or not massively recursive self-improvement plays a role. By “substantially smarter”, I mean an intelligence such that the difference between Einstein and the average human looks like a rounding error in comparison.
What probability, if one can meaningfully be assigned, would you put on the first strong AI exhibiting both of those traits? (Not trying to “grill” you; I can’t even imagine a good order of magnitude to put on that probability.)
I don’t think I can come up with numerical probabilities, but I consider “massively smarter than a human” and “unfriendly” to be the default values for those characteristics, and don’t expect the first AGI to differ from the default unless there is a massive deliberate effort to make it otherwise.