Thanks for an excellent reply! One possible crux is that I don’t think that synthesized human values are particularly useful; I’d expect that AGI systems can do their own synthesis from a much wider range of evidence (including law, fiction, direct observation, etc.). As to the specific points, I’d respond:
There is no unified legal theory precise enough to be practically useful for AI understanding human preferences and values; liberal and social democracies alike tend to embed constraints in law, with individuals and communities pursuing their values in the lacunae.
The rigorous tests of legal theories are carried out inside the system of law, and bent by systems of unjust power (e.g. disenfranchisement). We cannot validate laws or legal theories in any widely agreed-upon manner.
Law often lacks settled precedent, especially regarding new technologies and in cases of disagreement between nations or cultures.
I reject the assertion that imposition by a government necessarily makes a law legitimate. While I agree we don’t have a mechanism to ‘align the rest of the humans’ with a theory or meta-theory, I don’t think this is relevant (and in any case it’s equally applicable to law).
I agree that “moral lock-in” would be a disaster. However, I dispute that law accurately reflects the evolving will of citizens; or the proposition that so reflecting citizens’ will is consistently good (cf. reproductive rights, civil rights, impacts on foreign nationals or future generations...)
These points are about law as it exists as a widely-deployed technology, not idealized democratic law. However, only the former is available to would-be AGI developers!
Law does indeed provide useful evidence about human values, coordination problems, and legitimacy—but this alone does not distinguish it.
There does seem to be legal theory precise enough to be practically useful for AI understanding human preferences and values. To take just one example: the huge amount of legal theory on how to craft directives, for instance, whether to make directives in contracts and legislation more rule-like or more standard-like. Rules (e.g., “do not drive more than 60 miles per hour”) are more targeted directives than standards. If comprehensive enough for the complexity of their application, rules give the rule-maker more clarity than standards over the outcomes that will be realized conditional on the specified states (and agents’ actions in those states, which are a function of any behavioral impact the rules might have had). Standards (e.g., “drive reasonably” for California highways) allow parties to contracts, judges, regulators, and citizens to develop shared understandings and adapt them to novel situations (i.e., to generalize expectations regarding actions taken to unspecified states of the world). If rules are not written with enough potential states of the world in mind, they can lead to unanticipated undesirable outcomes (e.g., a driver following the rule above is too slow to bring their passenger to the hospital in time to save their life), but enumerating all the potentially relevant state-action pairs is excessively costly outside of the simplest environments. In practice, most legal provisions land somewhere on a spectrum between pure rule and pure standard, and legal theory can help us estimate the right location and combination of “rule-ness” and “standard-ness” when specifying new AI objectives.
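The rule-versus-standard contrast can be sketched in code. This is only a toy illustration: the function names, thresholds, and context flags are my own assumptions, standing in for the interpretive work a judge or regulator would actually do.

```python
# Toy sketch: a "rule" as a fixed predicate vs. a "standard" whose meaning
# is filled in case by case. All names and numbers here are illustrative.

def rule_compliant(speed_mph: float) -> bool:
    """Rule: 'do not drive more than 60 miles per hour'.
    Precise over the specified states, but silent on unanticipated ones
    (e.g., a medical emergency)."""
    return speed_mph <= 60

def standard_compliant(speed_mph: float, context: dict) -> bool:
    """Standard: 'drive reasonably'. A crude heuristic plays the role of
    the evaluator (judge/regulator) who adapts the directive to context."""
    limit = 60
    if context.get("emergency"):
        limit = 80   # faster driving might be found reasonable here
    if context.get("heavy_rain"):
        limit = 40   # ...while slower driving is required here
    return speed_mph <= limit

# The rule gives one fixed verdict; the standard adapts to unspecified states.
print(rule_compliant(75))                           # False
print(standard_compliant(75, {"emergency": True}))  # True
```

The design trade-off mirrors the text: the rule is cheap to check and predictable, while the standard generalizes to states the drafter never enumerated, at the cost of depending on an evaluator.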
There are other dimensions of legal provision design, related to the rule-ness versus standard-ness axis, that could further inform AI design, e.g., “determinacy,” “privately adaptable” rules (“rules that allocate initial entitlements but do not specify end-states”), and “catalogs” (“a legal command comprising a specific enumeration of behaviors, prohibitions, or items that share a salient common denominator and a residual category—often denoted by the words ‘and the like’ or ‘such as’”).
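The “catalog” structure in particular is easy to make concrete: an explicit enumeration plus a residual “and the like” category. In this hypothetical sketch the item names and the similarity test are invented for illustration; in real law the residual judgment is made by a court, not a string check.

```python
# Toy sketch of a "catalog" provision: a specific enumeration of covered
# items plus a residual category for things "of the like kind".
# All item names and the similarity heuristic are hypothetical.

ENUMERATED = {"switchblade", "dagger", "stiletto"}  # the explicit list

def covered_by_catalog(item: str, shares_common_denominator) -> bool:
    """An item is covered if it is enumerated, or if an interpretive
    judgment finds it shares the list's salient common denominator
    (the 'such as' / 'and the like' residual)."""
    return item in ENUMERATED or shares_common_denominator(item)

# A crude stand-in for the interpretive judgment a court would make:
is_bladed = lambda item: item.endswith("knife")

print(covered_by_catalog("dagger", is_bladed))         # True (enumerated)
print(covered_by_catalog("hunting knife", is_bladed))  # True (residual)
print(covered_by_catalog("umbrella", is_bladed))       # False
```

The catalog sits between a pure rule (just the enumeration) and a pure standard (just the residual judgment), which is why it is a useful intermediate point on the spectrum described above.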
Laws are validated in a widely agreed-upon manner: court opinions.
I agree that law lacks settled precedent across nations, but within a nation like the U.S. there is, at any given time, settled precedent. New precedents are routinely set, but at any given moment there is a body of law that represents the latest version.
It seems that a crux of our overall disagreement about the usefulness of law is whether imposition by a democratic government makes a law legitimate. My arguments depend on that being true.
In response to “I dispute that law accurately reflects the evolving will of citizens; or the proposition that so reflecting citizens’ will is consistently good”: I agree it does not reflect the evolving will of citizens perfectly, but it does so better than any alternative. I think reflecting the latest version of citizens’ views is important because I hope we continue on a positive trajectory toward better views over time.
The bottom line is that democratic law is far from perfect, but, as a process, I don’t see any better alternative that would garner the buy-in needed to practically elicit human values in a scalable manner that could inform AGI about society-level choices.