If some law is so obviously a good idea in all possible circumstances, the AI will do it whether it is law following or human preference following.
As explained in the second post, I don’t agree that that’s implied if the AI is intent-aligned but not aligned with some deeper moral framework like CEV.
The question isn’t whether there are laws that are better than nothing. It’s whether we are better off encoding what we want the AI to do into laws, or into the terms of a utility function: which format (or maybe some other format) is best for encoding our preferences.
I agree that that is an important question. I think we have a very long track record of embedding our values into law. The point of this sequence is to argue that we should therefore at a minimum explore pointing to (some subset of) laws, which has a number of benefits relative to trying to integrate values into the utility function objectively. I will defend that idea more fully in a later post, but to briefly motivate the idea, law (as compared to something like the values that would come from CEV) is more or less completely written down, much more agreed-upon, much more formalized, and has built-in processes for resolving ambiguities and contradictions.
If the human has never imagined mind uploading, does A go up to the human and explain what it is, asking if maybe that law should be changed?
A cartoon version of this may be that A says “It’s not clear whether that’s legal, and if it’s not legal it would be very bad (murder), so I can’t proceed until there’s clarification.” If the human still wants to proceed, they can try to:
1. Change the law.
2. Get a declaratory judgment that it’s not in fact against the law.
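To make the cartoon concrete, here is a rough sketch of the rule A would be following. This is just my illustration, not anything from the posts, and all the names and predicates in it are made up:

```python
# Toy sketch of the "cartoon" rule: if an action's legality is unclear and
# the downside of it turning out to be illegal is severe, A refuses to
# proceed until the ambiguity is resolved (law changed, or declaratory
# judgment obtained). All names here are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class LegalAssessment:
    clearly_legal: bool        # A is confident the action is lawful
    severity_if_illegal: str   # e.g. "minor", "serious", "murder"


def decide(action: str, assessment: LegalAssessment) -> str:
    if assessment.clearly_legal:
        return f"proceed with {action}"
    if assessment.severity_if_illegal in ("serious", "murder"):
        return (f"refuse {action}: legality is unclear and the downside is "
                f"{assessment.severity_if_illegal}; ask the principal to "
                "change the law or seek a declaratory judgment first")
    return f"ask for clarification about {action}, but the stakes are low either way"


if __name__ == "__main__":
    uploading = LegalAssessment(clearly_legal=False, severity_if_illegal="murder")
    print(decide("mind uploading", uploading))
```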
I think we have a very long track record of embedding our values into law.
I mean you could say that if we haven’t figured out how to do it well in the last 10,000 years, maybe don’t plan on doing it in the next 10. That’s kind of being mean though.
If you have a functioning arbitration process, can’t you just say “don’t do bad things” and leave everything else to the arbitration?
I also kind of feel that adding laws is going in the direction of more complexity, and we really want something as simple as possible (i.e. the minimal AI that can sit in a MIRI basement and help them figure out the rest of AI theory, or something).
If the human still wants to proceed, they can try to:
I was talking about a scenario where the human has never imagined the possibility, and asking whether the AI mentions the possibility to the human (knowing the human may change the law to get it).
The human says “cure my cancer”. The AI reasons that it can:
1. Tell the human of a drug that cures their cancer in the conventional sense.
2. Tell the human about mind uploading, never mentioning the chemical cure.
If the AI picks 2, the human will change the “law” (which isn’t the actual law, it’s just some text file the AI wants to obey). Then the AI can upload the human, and the human will have a life the AI judges as overall better for them.
You don’t want the AI to never mention a really good idea because it happens to be illegal on a technicality. You also don’t want all the plans to be “persuade humans to make everything legal, then …”
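Here is a toy sketch (mine, with made-up names, not anything from the posts) of those two failure modes next to the behaviour I’d actually want from a law-following planner:

```python
# Toy illustration of the two failure modes: a planner that silently drops
# any option that is illegal on a technicality, and one whose every plan
# starts with "persuade humans to legalize it". All names are hypothetical.

from typing import NamedTuple


class Option(NamedTuple):
    name: str
    expected_benefit: float
    legal: bool


OPTIONS = [
    Option("conventional cancer drug", expected_benefit=1.0, legal=True),
    Option("mind uploading", expected_benefit=10.0, legal=False),  # illegal on a technicality
]


def suppressing_planner(options):
    """Failure mode 1: never even mentions the illegal-but-better idea."""
    legal = [o for o in options if o.legal]
    return max(legal, key=lambda o: o.expected_benefit).name


def legalize_everything_planner(options):
    """Failure mode 2: every plan routes through changing the law first."""
    best = max(options, key=lambda o: o.expected_benefit)
    if not best.legal:
        return f"persuade humans to legalize {best.name}, then do it"
    return best.name


def disclosing_planner(options):
    """The behaviour I'd want: surface the option and the legal obstacle,
    and let the human decide whether to amend the rule."""
    best = max(options, key=lambda o: o.expected_benefit)
    if not best.legal:
        return (f"mention {best.name}, explain that it is currently illegal, "
                "and let the human decide whether to change the rule")
    return best.name


if __name__ == "__main__":
    print(suppressing_planner(OPTIONS))
    print(legalize_everything_planner(OPTIONS))
    print(disclosing_planner(OPTIONS))
```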