Yes, a conviction under an unjust law is bad; that’s not in dispute. The problem is whether nullifying is an appropriate way to fight this injustice if you reason by TDT, a decision theory that ranks very well on numerous desiderata that account for these intuitions. Appealing to a specific moral duty doesn’t resolve the problem, for the same reason that appealing to greed doesn’t justify two-boxing on Newcomb’s problem.
For my part, I do have a big problem with drug laws (despite not planning to use drugs myself). And before thinking about this as a rationalist, I did favor jury nullification. But if I put my TDT hat on, here are the problems I see with it:
As loqi said, if jurors tended to nullify, then the system, anticipating this, would either stop using juries or resort to extremely stringent screening to make sure there are no nullifiers (potentially involving massive violations of jurors’ privacy, which in turn would make them very circumspect about discussing their political views; even today people complain about the intrusiveness of the questions jurors are asked). Or it would lower the threshold for conviction so that a conviction could go through even when some jurors acquit “for a bad reason”.
(I think this is similar to the old debate about whether you should cut down the law to get at the devil, or Hitler, etc. Or whether a judge in a slavery-supporting society who “sees the light” should suddenly start ignoring the law and making whatever rulings go against slavery.)
TDT asks you to consider the consequences, conditioning on yourself (and all similar instantiations of your algorithm) being the kind of person who would keep quiet about your opinion on nullification and the particular law, and then nullify if you didn’t like the law.
This seems to be equivalent to humans adopting the policy of “using any discretion I have to bend the application of the law toward the legal regime I prefer”. But if humans deemed this optimal, it would not be possible to have a “rule of law” system, in which there are definite laws, people can know when they’re breaking them, and favoritism is disallowed. It would probably not be possible to implement any law except those which are extremely popular (which may be a good thing).
So I’m forced to conclude that jury nullification is in the unfortunate position of “feeding off of its own existence”, like stealing or two-boxing. While it may appear that it’s a chance to shift utility toward people you like, your deciding to do so has broader implications.
So at the very least you should say upfront that you regard it as optimal to nullify unjust laws, which will get you tossed out of the pool but otherwise unhurt. (I’ve heard that if you even exhibit familiarity with related keywords, that’s enough to get dismissed.)
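To make the contrast concrete, here is a toy sketch in Python (entirely my own construction, with made-up payoff numbers and a made-up share of jurors running your algorithm) of the difference between evaluating a single act of nullification causally and evaluating the policy the way TDT does, conditioning on every instantiation of your algorithm deciding the same way:

```python
# Toy model only: the payoffs and the fraction of jurors "like you" are
# invented for illustration, not derived from anything in this thread.

JUROR_SHARE = 0.5          # assumed fraction of jurors running your algorithm
BLOCKED_INJUSTICE = 1.0    # value of preventing one unjust conviction
UNJUST_CONVICTION = -1.0   # value of one unjust conviction going through
SYSTEM_BACKLASH = -5.0     # assumed cost once the system anticipates nullifiers:
                           # juror screening, privacy intrusions, weaker juries

def causal_value(nullify: bool) -> float:
    """Evaluate one verdict in isolation, holding the rest of the system fixed."""
    return BLOCKED_INJUSTICE if nullify else UNJUST_CONVICTION

def timeless_value(nullify: bool) -> float:
    """Evaluate the policy, conditioning on all similar instantiations of
    your algorithm deciding the same way (the TDT-style evaluation)."""
    if not nullify:
        return UNJUST_CONVICTION
    # A predictable share of jurors nullifying makes the system adapt.
    return BLOCKED_INJUSTICE + JUROR_SHARE * SYSTEM_BACKLASH

for nullify in (False, True):
    print(f"nullify={nullify}: causal={causal_value(nullify):+.1f}, "
          f"timeless={timeless_value(nullify):+.1f}")
```

With these arbitrary numbers, nullifying wins the one-off evaluation (+1.0 vs. −1.0) but loses the policy evaluation (−1.5 vs. −1.0) once the system’s anticipated response is charged against it, which is the shape of the argument above.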
Thanks, though part of my question is, due in part to the sorts of issues you bring up… should I consider it optimal for me to run the algorithm “nullify laws that seem unjust to me”?
But if humans deemed this optimal, it would not be possible to have a “rule of law” system, in which there are definite laws, people can know when they’re breaking them, and favoritism is disallowed.
The consequent here is independent of the antecedent—I don’t think the system you describe is possible under either circumstance.
While it may appear that it’s a chance to shift utility toward people you like, your deciding to do so has broader implications.
So at the very least you should say upfront that you regard it as optimal to nullify unjust laws, which will get you tossed out of the pool but otherwise unhurt.
I’m not seeing how this follows without some additional value judgments. You’re basically saying “Widespread nullification would fuck up the legal system, so don’t do it”, instead of “… so beware of the trade-offs involved”.
The consequent here is independent of the antecedent—I don’t think the system you describe is possible under either circumstance.
Not perfectly, no, but any decent approximation has the norm I described (that you shouldn’t use your discretion to favor lawbreakers simply because doing so would bring the law closer to what you personally desire) as a prerequisite; I don’t see how it could be otherwise.
I thought that’s what I was doing with:
While it may appear that it’s a chance to shift utility toward people you like, your deciding to do so has broader implications.