So if there existed a hypothetical institution with the power to mete out preventive imprisonment, and which would reliably base its decisions on mathematically sound consequentialist arguments, would you be OK with it? I’m really curious how many consequentialists here would bite that bullet.
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb’s problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
I don’t trust any human institution to satisfy the first two criteria (honesty and accuracy), and I expect anything that does satisfy the first two would not satisfy the third (no better option).
This seems to be the largest lapse of logic in the (otherwise very good) post above. Only a few paragraphs from an argument involving the reversal test, the author apparently fails to apply it in a situation where it’s strikingly applicable.
The topic of preemptive imprisonment was not under discussion, so it seems strange to consider it an error not to apply a reversal test to it.
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb’s problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
Please see the edit I just added to the post; it seems like my wording wasn’t precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb’s problem).
The topic of preemptive imprisonment was not under discussion, so it seems strange to consider it an error not to apply a reversal test to it.
I agree that it’s not critical to the main point of the post, but I would say that it’s a question that deserves at least a passing mention in any discussion of a consequentialist model of blame, even a tangential one.
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb’s problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
Please see the edit I just added to the post; it seems like my wording wasn’t precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb’s problem).
I would also be OK with this… however, by your own definition it would never happen in practice, except for extreme cases like cults or a rage virus that only infects redheads.
How much of a statistical correlation would you require? Anything over 50%? 90%? 99%? I’d still have a problem with this. “It is better [one hundred] guilty Persons should escape than that one innocent Person should suffer.”—Ben Franklin
An article by Steve Landsburg on a similar quote.
And a historical overview of related quotes.
Enough to justify imprisoning everyone. It depends on how long they’d stay in jail, the magnitude of the crime, etc.
I really don’t care what Ben Franklin thinks.
Sorry, not arguing from authority, the quote is a declaration of my values (or maybe just a heuristic :-), I just wanted to attribute it accurately.
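To make the arithmetic behind the “50%? 90%? 99%?” question concrete, here is a minimal back-of-the-envelope sketch, assuming a crude expected-value test in which wrongly imprisoning one innocent person is weighted R times worse than letting one guilty person go free (the weighting and the sample numbers are illustrative assumptions, not anything proposed above). On that assumption, preventive imprisonment only “pays off” when the predicted probability of guilt exceeds R/(R+1), so Franklin’s hundred-to-one ratio corresponds to roughly a 99% threshold:

```python
# Hedged illustration only: a naive expected-value threshold for preventive
# imprisonment. R is how many times worse we treat imprisoning one innocent
# person than letting one guilty person go free; Franklin's quote suggests
# something like R = 100. All weights here are assumptions for the example.

def required_probability(r: float) -> float:
    """Minimum predicted probability of future guilt at which imprisonment
    passes the crude test  p * 1 > (1 - p) * r,  i.e. p > r / (r + 1)."""
    return r / (r + 1.0)

for r in (1, 10, 100):
    print(f"R = {r:>3}: imprison only if p > {required_probability(r):.3f}")
# R =   1: imprison only if p > 0.500
# R =  10: imprison only if p > 0.909
# R = 100: imprison only if p > 0.990
```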
My problem may just be lack of imagination. How could this work in reality? If we are talking about groups that are statistically more likely to commit crimes, we already have those. How is what is proposed above different from imprisoning these groups? Is it just a matter of doing a cost-benefit analysis?
It’s not different. Vladimir is arguing that if you agree with the article, you should also support preemptive imprisonment.
Preemptive imprisonment (if that’s what they’re calling it) is just wrong on the grounds that our most sacred rights would be violated by it. One could argue that our current system does this by making attempted murder, death threats, etc., a crime, but that’s a lot more practical than grouping potential criminals by statistics. How far do you go? The only possible conclusion of such a system would be mass extermination (I’m serious). Eliminate all but the people least likely to commit a crime, those that have genes that make them extremely non-aggressive (or easily controlled). Hell, why not just exterminate EVERYONE? No crimes EVER. Human values are complex, and if you reduce them to “do what’s best for everyone”, you basically agree to abolish the ones we do have.
EDIT: This was a long time ago and I have absolutely no idea what I meant by this. I won’t delete it, but note even I think this is stupid as hell.
Your first sentence is a classic summary of the deontological position. There’s nothing on Less Wrong I can think of explaining why most of us wouldn’t agree with it, which is a darned shame in my opinion.
The part about mass extermination I can talk about more confidently. Consequentialists only do things if the benefits are greater than the cost. Preemptive imprisonment would work if the benefits in lower crime were greater than the very real cost to the imprisoned individual. Mass extermination doesn’t leave anyone better off, cause they’re all dead, so there’s no benefit and a huge cost.
Your first sentence is a classic summary of the deontological position. There’s nothing on Less Wrong I can think of explaining why most of us wouldn’t agree with it, which is a darned shame in my opinion.
Err, maybe “most sacred rights” was the wrong wording. How about “moral values”? Same thing, don’t get technical.
The part about mass extermination I can talk about more confidently. Consequentialists only do things if the benefits are greater than the cost. Preemptive imprisonment would work if the benefits in lower crime were greater than the very real cost to the imprisoned individual. Mass extermination doesn’t leave anyone better off, cause they’re all dead, so there’s no benefit and a huge cost.
But you’re assuming that “Mass extermination doesn’t leave anyone better off, cause they’re all dead”. How do you define “better off”? Once you can do this, maybe that will make more sense. Oh, by the way, exterminating groups of individuals could make, in certain situations, things “better off”. So maybe mass exterminations would have no advantage, but slaughtering that entire mafia family could save us a lot of trouble. Then you get back to the “eye for an eye” scenario. Harsher punishments create a greater deterrent for the individual and the rest of society. Not to mention that amputations and executions are by far cheaper and easier than prisons.
Err, maybe “most sacred rights” was the wrong wording. How about “moral values”?
This goes deeper than you think. The position we’re advocating, in essence, is that
There are no inalienable rights or ontologically basic moral values. Everything we’re talking about when we use normative language is a part of us, not a property of the universe as a whole.
This doesn’t force us to be nihilists. Even if it’s just me that cares about not executing innocent people, I still care about it.
It’s really easy to get confused thinking about ethics; it’s a slippery problem.
The best way to make sure that more of what we value happens, generally speaking, is some form of consequentialist calculus. (I personally hesitate to call this utilitarianism because that’s often thought of as concerned only with whether people are happy, and I care about some other things as well.)
This doesn’t mean we should throw out all general rules; some absolute ethical injunctions should be followed even when it “seems like they shouldn’t”, because of the risk of one’s own thought processes being corrupted in typical human ways.
This may sound strange, but in typical situations it all adds up to normality: you won’t see a rationalist consequentialist running around offing people because they’ve calculated them to be net negatives for human values. It can change the usual answers in extreme hypotheticals, in dealing with uncertainty, and in dealing with large numbers; but that’s because “common-sense” thinking ends up being practically incoherent in recognizable ways when those variables are added.
I don’t expect you to agree with all of this, but I hope you’ll give it the benefit of the doubt as something new, which might make sense when discussed further...
So maybe mass exterminations would have no advantage, but slaughtering that entire mafia family could save us a lot of trouble.
In theory, sure. In practice, there are a large number of social dynamics, involving things such as people’s tendency to abuse power, that would make this option not worthwhile.
Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an “eye for an eye” society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that’s why we try to look at the whole picture.
In theory, sure. In practice, there are a large number of social dynamics, involving things such as people’s tendency to abuse power, that would make this option not worthwhile.
Alright, so what if it was done by a hypothetical super-intelligent AI or an omniscient being of some sort? Would you be OK with it then?
Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an “eye for an eye” society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that’s why we try to look at the whole picture.
This is exactly what I mean. What are we trying to “optimize” for?
Alright, so what if it was done by a hypothetical super-intelligent AI or an omniscient being of some sort? Would you be OK with it then?
Probably not, because if it really was a super-intelligent AI, it could solve the problem without needing to kill anyone.
This is exactly what I mean. What are we trying to “optimize” for?
For general well-being. Something along the lines of “the amount of happiness minus the amount of suffering”, or “the successful implementation of preferences” would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn’t want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.
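As a toy sketch of the wirehead caveat (the World class, naive_utility, and the numbers below are invented for illustration, not anything from the discussion), literally maximizing “happiness minus suffering” picks the wireheaded world:

```python
# Hedged toy example: a naive "happiness minus suffering" utility function and
# why it is only a first approximation: it ranks a wireheaded world highest,
# even though that is probably not what we actually want. All names and
# numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class World:
    name: str
    happiness: float  # total happiness, arbitrary units
    suffering: float  # total suffering, same units

def naive_utility(w: World) -> float:
    """First-approximation utility: happiness minus suffering."""
    return w.happiness - w.suffering

worlds = [
    World("status quo", happiness=60.0, suffering=40.0),
    World("everyone wireheaded", happiness=100.0, suffering=0.0),
]
print(max(worlds, key=naive_utility).name)  # prints "everyone wireheaded"
```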
Probably not, because if it really was a super-intelligent AI, it could solve the problem without needing to kill anyone.
They could possibly come up with an alternative, but we must consider that it very well may be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you’re going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.
For general well-being. Something along the lines of “the amount of happiness minus the amount of suffering”, or “the successful implementation of preferences” would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn’t want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.
In other words, we have to set its goal as the ability to predict our values, which is a problem since you can’t make AI goals in English.
They could possibly come up with an alternative, but we must consider that it very well may be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you’re going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.
I’m not sure of what exactly you’re trying to say here.
In other words, we have to set its goal as the ability to predict our values, which is a problem since you can’t make AI goals in English.
Yup.