Test for Consequentialism:

Suppose you are a judge deciding whether person X or person Y committed a murder. Let’s also assume your society has the death penalty. A supermajority of society (say, encouraged by the popular media) has come to think that X committed the crime, which would decrease their confidence in the justice system if he is set free, but you know (e.g. because you know Bayes) that Y was responsible. We also assume you know that Y won’t reoffend if set free, because (say) they have been too spooked by this episode. Will you condemn X or Y?
(Before you quibble your way out of this, read The Least Convenient Possible World)
If you said X, you pass.
Just a response to “Saddam Hussein doesn’t deserve so much as a stubbed toe.”
N.B. This does not mean I’m against consequentialism.
… which would decrease their confidence in the justice system if he is set free...
By condemning X, I uphold the people’s trust in the justice system, while making it unworthy of that trust. By condemning Y, I reduce the people’s trust in the justice system, while making the system worthy of their trust. But what is their trust worth, without the reality that they trust in?
If I intend the justice system to be worthy of confidence, I desire to act to make it worthy of confidence. If I intend it to be unworthy of confidence, I desire to act to make it unworthy of confidence. Let me not become unattached to my desires, nor attached to what I do not desire.
Also, there is no Least Convenient Possible World. The Least Convenient Possible World for your interlocutors is the Most Convenient Possible World for yourself, the one where you get to just say “Suppose that such and such, which you think is Bad, were actually Good. Then it would be Good, wouldn’t it?”
In the least convenient possible world, condemning an innocent in this one case will not make the system generally less worthy of confidence. Maybe you know it will never happen again.

Maybe everyone would have a pony.
ETA: It is not for the proponent of an argument to fabricate a Least Convenient Possible World—that is, a Most Convenient Possible World for themselves—and insist that their interlocutors address it, brushing aside every argument they make by inventing more and more Conveniences. The more you add to the scenario, the smaller the sliver of potential reality you are talking about. The endpoint of this is the world in which the desired conclusion has been made true by definition, at which point the claim no longer refers to anything at all.

The discipline of the Least Convenient Possible World is a discipline for oneself, not a weapon to point at others.
If I, this hypothetical judge, am willing to have the innocent punished and the guilty set free, to preserve confidence that the guilty are punished and the innocent are set free, I must be willing that I and my fellow judges do the same in every such case. Call this the Categorical Imperative, call it TDT, that is where it leads, at the speed of thought, not the speed of time: to take one step is to have travelled the whole way. I would have decided to blow with the mob and call it justice. It cannot be done.
The categorical imperative ignores the possibility of mixed strategies—it may be that doing X all the time is bad, doing Y all the time is bad, but doing a mixture of X and Y is not. For instance, if everyone only had sex with someone of the same sex, that would destroy society through lack of children. (And if everyone only had sex with someone of the opposite sex, gays would be unsatisfied, of course.) The appropriate thing to do is to allow everyone to have sex with the type of partner that fits their preferences. Or to put it another way, “doing the same thing” and “in the same kind of case” depend on exactly what you count as the same—is the “same” thing “having only gay sex” or “having either type of sex depending on one’s preference”?
In the punishment case, it may be that we’re better off with a mixed strategy of sometimes killing innocent people and sometimes not; if you always kill innocent people, the justice system is worthless, but if you never kill innocent people, people have no confidence in the justice system and it also ends up being worthless. The optimal thing to do may be to kill innocent people a certain percentage of the time, or only in high-profile public cases, or whatever. Asking “would you be willing to kill innocent people all the time” would be as inappropriate as asking “would you be willing to be in a society where people (when having sex) have gay sex all the time”. You might be willing to do the “same thing” all the time where the “same thing” means “follow the public’s preference, which sometimes leads to killing the innocent” (not “always kill the innocent”), just like in the gay sex example it means “follow someone’s sexual preference, which sometimes leads to gay sex” (not “always have gay sex”).
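To make the mixed-strategy point concrete, here is a minimal sketch in Python with an entirely invented utility function (the numbers model nothing real; it illustrates the structure of the argument, not an endorsement of any policy): both pure strategies score zero, while some intermediate probability scores best.

    # Toy model (hypothetical numbers): expected value of a strategy that
    # convicts a known innocent with probability p. In this stipulated
    # scenario, accuracy falls as p rises, while public confidence needs
    # some p > 0. Both extremes are worthless; the optimum is interior.
    def system_value(p: float) -> float:
        accuracy = 1.0 - p        # fraction of verdicts that are correct
        confidence = p ** 0.5     # stipulated confidence effect
        return accuracy * confidence

    best_p = max((p / 100 for p in range(101)), key=system_value)
    print(best_p, round(system_value(best_p), 3))  # 0.33 0.385: interior optimum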
Yes, the categorical imperative has the problem of deciding on the reference class, as do TDT, the outside view, and every attempt to decide what precedent will be set by some action, or what precedent the past has set for some decision. Eliezer coined the phrase “reference class tennis” to refer to the broken sort of argumentation that consists of choosing competing reference classes in order to reach desired conclusions.
So how do you decide on the right reference class, rather than the one that lets you conclude what you already wanted to for other reasons? TDT, being more formalised (or intended to be, if MIRI and others ever work out exactly what it is) suggests a computational answer to this question. The class that your decision sets a precedent for is the class that shares the attributes that you actually used in making your decision—the class that you would, in fact, make the same decision for.
This is not a solution to the reference class problem, or even an outline of a solution; it is only a pointer in a direction where a solution might be found. And even if TDT is formalised and gives a mathematical solution to the reference class problem, we may be in the same situation as we are with Bayesian reasoning: we can, and statisticians do, actually apply Bayes’ theorem in cases where the actual numbers are available to us, but “deep” Bayesianism can only be practiced by heuristic approximation.
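As a toy illustration of that pointer (the Case fields and the verdict procedure are invented for the example): the precedent class of a decision is just the set of cases that the procedure, using the attributes it actually consulted, treats the same way.

    # Sketch: the class a decision sets a precedent for is the set of
    # cases for which the same procedure returns the same decision.
    from dataclasses import dataclass
    from itertools import product

    @dataclass(frozen=True)
    class Case:
        defendant_guilty: bool
        public_blames_defendant: bool

    def verdict(c: Case) -> str:
        # This judge actually consults only guilt, not public opinion.
        return "condemn" if c.defendant_guilty else "acquit"

    def precedent_class(procedure, decided_case, universe):
        return [c for c in universe if procedure(c) == procedure(decided_case)]

    universe = [Case(g, b) for g, b in product([True, False], repeat=2)]
    print(precedent_class(verdict, Case(False, True), universe))
    # Every case with defendant_guilty=False, whatever the public thinks.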
“Would you like it if everyone did X” is just a bad idea, because there are some things whose prevalences I would prefer to be neither 0% nor 100%, but somewhere in between. That’s really an objection to the categorical imperative, period. I can always say that I’m not really objecting to the categorical imperative in such a situation by rephrasing it in terms of a reference class “would you like it if everyone performed some algorithm that produced X some of the time”, but that gets far away from what most people mean when they use the categorical imperative, even if technically it still fits.
An average person not from this site would not even comprehend “would you like it if everyone performed some algorithm with varying results” as a case of the golden rule, categorical imperative, or whatever, and certainly wouldn’t think of it as an example of everyone doing the “same thing”. In most people’s minds, doing the same thing means to perform a simple action, not an algorithm.
“Would you like it if everyone did X” is just a bad idea, because there are some things whose prevalences I would prefer to be neither 0% nor 100%, but somewhere in between. That’s really an objection to the categorical imperative, period.
In that case, the appropriate X is to perform the action with whatever probability you would wish to be the case. It still fits the CI.
but that gets far away from what most people mean when they use the categorical imperative, even if technically it still fits.
Or more briefly, it still fits. But you have to actually make the die roll. What “an average person not from this site” would or would not comprehend by a thing is not relevant to discussions of the thing itself.
In that case, the appropriate X is to perform the action with whatever probability you would wish to be the case. It still fits the CI.
In that case, you can fit anything whatsoever into the categorical imperative by defining an appropriate reference class and action. For instance, I could justify robbery with “How would I like it, if everyone were to execute ‘if (person is Jiro) then rob else do nothing’”. The categorical imperative ceases to have meaning unless some actions and some reference classes are unacceptable.
Or more briefly, it still fits
That’s too brief, because “what do most people mean when they say this” actually matters. They clearly don’t mean for it to include “if (person is Jiro) then rob else do nothing” as a single action that can be universalized by the rule.
For instance, I could justify robbery with “How would I like it, if everyone were to execute ‘if (person is Jiro) then rob else do nothing’”.
The reason that doesn’t work is that people who are not Jiro would not like it if everyone were to execute ‘if (person is Jiro) then rob else do nothing’, so they couldn’t justify you robbing that way. The fact that the rule contains a gerrymandered reference class isn’t by itself a problem.
Does the categorical imperative require everyone to agree on what they would like or dislike? That seems brittle.

I’ve always heard it, the Golden Rule, and other variations stated as some form of “would you like it if everyone were to do that?” I’ve never heard of it as “would everyone like it if everyone were to do that?”. I don’t know where army1987 is getting the second version from.

This post discusses the possibility of people “not in moral communion” with us, with the example of a future society of wireheads.
In that case, you can fit anything whatsoever into the categorical imperative by defining an appropriate reference class and action.
Doing which is reference class tennis, as I said. The solution is to not do that, to not write the bottom line of your argument and then invent whatever dishonest string of reasoning will end there.
The categorical imperative ceases to have meaning unless some actions and some reference classes are unacceptable.
No kidding. And indeed some are not, as you clearly understand, from your ability to make up an example of one. So what’s the problem?
What principle determines what actions are unacceptable apart from “they lead to a bottom line I don’t like”? That’s the problem. Without any prescription for that, the CI fails to constrain your actions, and you’re reduced to simply doing whatever you want anyway.
This asserts a meta-meta-ethical proposition that you must have explicit principles to prescribe all your actions, without which you are lost in a moral void. Yet observably there are good and decent people in the world who do not reflect on such things much, or at all.
If to begin to think about ethics immediately casts you into a moral void where for lack of yet worked out principles you can no longer discern good from evil, you’re doing it wrong.
Look, I have no problem with basing ethics on moral intuitions, and what we actually want. References to right and wrong are after all stored only in our heads.
But in the specific context of a discussion of the Categorical Imperative—which is supposed to be a principle forbidding “categorically” certain decisions—there needs to be some rule explaining what “universalizable” actions are not permitted, for the CI to make meaningful prescriptions. If you simply decide what actions are permitted based on whether you (intuitively) approve of the outcome, then the Imperative is doing no real work whatsoever.
If, like most people, you don’t want to be murdered, the CI will tell you not to murder. If you don’t want to be robbed, it will tell you not to rob. Etc. It does work for the normal majority, and the abnormal minority are probably going to be a problem under any system.
Please read the above thread and understand the problem before replying.
But for your benefit, I’ll repeat it: explain to me, in step-by-step reasoning, how the categorical imperative forbids me from taking the action “if (I am nshepperd) then rob else do nothing”. It certainly seems like it would be very favourable to me if everyone did “if (I am nshepperd) then rob else do nothing”.
That’s a blatant cheat. How can you have a universal law that includes a specific exception for a named individual?

The way nshepperd just described. It is, after all, a universal law, applied in every situation. It just returns different results for a specific individual. We can call a situation-sensitive law like this a piecewise law.
Most people would probably not want to live in a society with a universal law not to steal unless you are a particular person, if they didn’t know in advance whether or not the person would be them, so it’s a law one is unlikely to support from behind a veil of ignorance.
However, some piecewise laws do better behind veils of ignorance than non-piecewise universal laws. For instance, laws which distinguish our treatment of introverts from extroverts stand to outperform ones which treat both according to the same standard.
You can rescue non-piecewise categorical imperatives by raising them to a higher level of abstraction, but in order to keep them from being outperformed by piecewise imperatives, you need levels of abstraction higher than, for example, “Don’t steal.” At a sufficient level of abstraction, categorical imperatives stop being actionable guides, and become something more like descriptions of our fundamental values.
I’m all in favour of going to higher levels of abstraction. It’s a much better approach than coding in kittens-are-nice and slugs-are-nasty.

Is there anything that makes it qualitatively different from

if (subject == A) { return X }
elsif (subject == B) { return Y }
elsif (subject == C) { return Z }

…etc.?

No, there isn’t any real difference from that, which is why the example demonstrates a flaw in the Categorical Imperative. Any non-universal law can be expressed as a universal law. “The law is ‘you can rob’, but the law should only be applied to Jiro” is a non-universal law, but “The law is ‘if (I am Jiro) then rob else do nothing’ and this law is applied to everyone” is a universal law that has the same effect. Because of this ability to express one in terms of the other, saying “you should only do things if you would like for them to be universally applied” fails to provide any constraints at all, and is useless.
Of course, most people don’t consider such universal laws to be universal laws, but on the other hand I’m not convinced that they are consistent when they say so—for instance “if (I am convicted of robbery) then put me in jail else nothing” is a law that is of similar form but which most people would consider a legitimate universalizable law.
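A minimal sketch of the wrapping trick under discussion (function names invented): both rules below are total functions over their inputs, “applied to everyone” in the formal sense, yet one is gerrymandered to privilege a named individual and the other is the kind most people accept, which is exactly why universality alone does no filtering.

    # Sketch: a gerrymandered rule and an accepted rule, in the same form.
    def gerrymandered_law(agent: str) -> str:
        # Defined for every agent, hence "universal", but it privileges one name.
        return "rob" if agent == "Jiro" else "do nothing"

    def conviction_law(convicted_of_robbery: bool) -> str:
        # Structurally identical, yet most people call this universalizable.
        return "jail" if convicted_of_robbery else "do nothing"

    for agent in ["Jiro", "nshepperd", "TheAncientGeek"]:
        print(agent, "->", gerrymandered_law(agent))
    print(conviction_law(True), "/", conviction_law(False))  # jail / do nothing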
If the law gives different results for different people doing the same thing, it isn’t universal in the intended sense, which is pretty much the same as fairness.
“In the intended sense” is not a useful description compared to actually writing down a description. It also may not necessarily even be consistent.
Furthermore, it’s clear that most people consider “if (I am convicted of robbery) then put me in jail else nothing” to be a universal law in the intended sense, yet that gives different results for different people (one result for robbers, another result for non-robbers) doing the same thing (nothing, in either case).
It is possible to arrive at the intended sense by assuming that the people you are commenting on are not idiots who can be disproven with one-line comments.

Another facile point.

It’s also possible to completely fail to explain things to intelligent people by assuming that their intelligence ought to be a sufficient asset to make your explanations comprehensible to them. If people are consistently telling you that your explanations are unclear or don’t make sense, you should take very, very seriously the likelihood that, at least in your efforts to explain yourself, you are doing something wrong.

Which bit of “pretty much the same as fairness” were you having trouble with?

Do you think “all robbers should be jailed except TheAncientGeek” is a fair rule?

What rule would count as non-universal for you?
The “fairness” part. Falling back on another insufficiently specified intuitive concept doesn’t help explain this one. Is it fair to jail a man who steals a loaf of bread from a rich man so his nephew won’t starve? A simple yes or no isn’t enough here, we don’t all have identical intuitive senses of fairness, so what we need isn’t the output for any particular question, but the process that generates the outputs.
I don’t think “all robbers should be jailed except TheAncientGeek” is a fair rule, but that doesn’t advance the discussion from where we were already.
Here, a universal rule would be one that anyone could check at any time for relevant output (both “never steal” and “if nshepperd, steal, else do nothing” would be examples), while one which only produces output for a specific individual or in a specific instance (for example “nshepperd can steal,” or “on January 3rd, 2014, it is okay to steal”) would be a specific-case rule.
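That distinction can be sketched as total versus partial functions (names and signatures invented for illustration): a universal rule answers for anyone at any time, while a specific-case rule yields output only for one individual or one occasion.

    # Sketch: universal rules as total functions, specific-case rules as
    # partial ones that return nothing outside their special case.
    from datetime import date
    from typing import Optional

    def never_steal(agent: str, when: date) -> str:
        return "don't steal"   # total: answers for anyone, any time

    def piecewise(agent: str, when: date) -> str:
        # Also total, however unfair: it still answers for everyone.
        return "steal" if agent == "nshepperd" else "do nothing"

    def specific_case(agent: str, when: date) -> Optional[str]:
        if when == date(2014, 1, 3):
            return "it is okay to steal"
        return None            # no output at all for any other case

    print(never_steal("anyone", date.today()))
    print(piecewise("nshepperd", date.today()))
    print(specific_case("anyone", date.today()))  # None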
The “fairness” part. Falling back on another insufficiently specified intuitive concept doesn’t help explain this one.

It is not an intuition about what is true; it is a concept that helps to explain another concept, if you let it.
I don’t think “all robbers should be jailed except TheAncientGeek” is a fair rule, but that doesn’t advance the discussion from where we were already.
Then why do you think you can build explicit exceptions into rules and still deem them universal? I think you can’t because I think, roughly speaking, universal=fair.
Here, a universal rule would be one that anyone could check at any time for relevant output (both “never steal” and “if nshepperd, steal, else do nothing” would be examples), while one which only produces output for a specific individual or in a specific instance (for example “nshepperd can steal,” or “on January 3rd, 2014, it is okay to steal”) would be a specific-case rule.
Such a rule is useless for moral guidance. But intelligent people think the CI is useful for moral guidance. That should have told you that your guess about what “universal” means, in this context, is wrong. You should have discarded that interpretation and sought one that does not make the CI obviously foolish.
Such a rule is useless for moral guidance. But intelligent people think the CI is useful for moral guidance.
“Intelligent people” also think you shouldn’t switch in the common version of the Monty Hall problem. The whole point of this argument is to point out that the CI doesn’t make sense as given and therefore, that “intelligent people” are wrong about it.
That should have told you that your guess about what “universal” means, in this context, is wrong.
No, it tells me that people who think the CI is useful have not thought through the implications. It’s easy to say that rules like the ones given above can’t be made “universal”, but the same people who wouldn’t think such rules can be made universal are willing to make other rules of similar form universal (why is a rule that says that only Jiro can rob not “universal”, but one which says that only non-minors can drink alcohol is?)
I don’t think there is, but then, I don’t think that classifying things as universal law or not is usually very useful in terms of moral guidelines anyway. I consider the Categorical Imperative to be a failed model.
Why is it a failed model? A counterexample was put forward that isn’t a universal law. That doesn’t prove the CI to be wrong. So what does?
We already adjust rules by reference classes, since we have different rules for minors and the insane. Maybe we just need rules that are apt to the reference class and impartial within it.
When you raise it to high enough levels of abstraction that the Categorical Imperative stops giving worse advice than other models behind a veil of ignorance, it effectively stops giving advice at all due to being too abstract to apply to any particular situation with human intelligence.
You can fragment the Categorical Imperative into vast numbers of different reference classes, but when you do it enough to make it ideally favorable from behind a veil of ignorance, you’ve essentially defeated any purpose of treating actions as if they were generalizable to universal law.
I think I’ve already made that implicit in my earlier comments; I’m judging based on the ability of a society run on such a model to appeal to people from behind a veil of ignorance.
You can fragment the Categorical Imperative into vast numbers of different reference classes, but when you do it enough to make it ideally favorable from behind a veil of ignorance, you’ve essentially defeated any purpose of treating actions as if they were generalizable to universal law.
I think that is a false dichotomy. One rule for everybody may well fail; everybody having their own rule may well fail. However, there is still the tertium datur of N>1 rules for M>1 people. Which is kind of how legal systems work in the real world.
Legal systems that were in place before any sort of Categorical Imperative formulation, and did not particularly change in response to it.
I think our own legal systems could be substantially improved upon, but that’s a discussion of its own. Do you think that the Categorical Imperative formulation has helped us, morally speaking, and if so how?
I would suggest that the Categorical Imperative has been considered at some length by many, if not all members of Less Wrong, but doesn’t have much currency because in general nobody here is particularly impressed with it. That is, they don’t think that it either improves upon or accurately describes our native morality.
If you think that people on Less Wrong ought to take it seriously, demonstrating that it does one of those would be the way to go.
I was deliberately not playing along with your framing that the CI is wrong by default unless elaborately defended.
I would suggest that the Categorical Imperative has been considered at some length by many, if not all members of Less Wrong, but doesn’t have much currency because in general nobody here is particularly impressed with it.
I see no evidence of that. If it had been considered at length, people would be able to understand it (you keep complaining that you do not), and they would be able to write relevant critiques that address what it is actually about.
If you think that people on Less Wrong ought to take it seriously, demonstrating that it does one of those would be the way to go.
Again, I don’t have to put forward a steelmanned version of a theory to demonstrate that it should not be lightly dismissed. That is a false dichotomy.
I’m not complaining that I don’t understand it, I’m complaining that your explanations do not make sense to me. Your formulation seems to differ substantially from Kant’s (for instance, the blanket impermissibility of stealing was a case he was sufficiently confident in to use as an example, whereas you do not seem attached to that principle.)
You haven’t explained anything solid enough to make a substantial case that it should not be lightly dismissed; continuing to engage at all is more a bad habit of mine than a sign that you’re presenting something of sufficient use to merit feedback. If you’re not going to bother explaining anything with sufficient clarity to demonstrate, crucially, both that you have a genuinely coherent idea of what you yourself are talking about, and that it is something we should take seriously, I am going to resolve not to engage any further, as I should have done well before now.
I’m not complaining that I don’t understand it, I’m complaining that your explanations do not make sense to me.
If you understand, why do you need me to explain?
for instance, the blanket impermissibility of stealing was a case he was sufficiently confident in to use as an example, whereas you do not seem attached to that principle
I have no idea what you are referring to.
You haven’t explained anything solid enough to make a substantial case that it should not be lightly dismissed;
Because I think you don’t have a coherent idea of what you’re talking about, and if you tried to formulate it rigorously you’d either have to develop one, or realize that you don’t know how to express what you’re proposing as a workable system. Explaining things to others is how we solidify or confirm our own understanding, and if you resist taking that step, you should not be assured of your own understanding.
Now you know why I was bothering to participate in the first place, and it is time, unless you’re prepared to actually take that step, for me to stop.
Why should I repeat what is in the literature on the CI, instead of you reading it? It is clear from your other comments that you don’t in fact understand it. It is not as if you had read some encyclopedia article and said “I don’t get this bit”—a perfectly ordinary kind and level of misunderstanding. Instead, you have tried to shoe-horn it into some weird computer-programming metaphor which is entirely inappropriate. It is that layer of “let’s translate this into some entirely different discipline” that is causing the problem for you and others.
Okay, I’m being really bad here, and I encourage anyone who’s following along to downvote me for my failure to disengage, but I might as well explain myself here to a point where you actually know what you’re arguing with.
I have already read Kant, and I wasn’t impressed; some intelligent people take the CI seriously, most, including most philosophers, do not. I think Kant was trying too hard to find ways he could get his formulation to seem like it worked, and not looking hard enough for ways he could get it to break down, and failed to grasp that he had insufficiently specified his core concepts in order to create a useful system (and also that he failed to prove that objective morality enters into the system on any level, but more or less took it for granted.)
I don’t particularly expect you to agree that piecewise rules like the ones I described qualify as “universal,” but I don’t think you or Kant have sufficiently specified the concept of “universal,” such that one can rigorously state what does or does not qualify, and I think that trying to so specify, for an audience prepared to point out failures of rigor in the formulation, would lead you to the conclusion that it’s much, much harder to develop a moral framework which is rigorous and satisfying and coherent than you or Kant have made it out to be.
I think that the Categorical Imperative fails to describe our intuitive sense of morality (I can offer explanations as to why if you wish, but I would be much more amenable to doing so if you would actually offer explanations for your positions when asked, rather than claiming it’s not your responsibility to do so), fails to offer improvements in desirability over our intuitive morality for a society that runs on it, judged from behind a veil of ignorance, and that there is no sound reason to think that it is somehow, in spite of these things, a True Objective Description of Morality; and absent such reason we should assume, as with any other hypothetical framework lacking such reason, that it’s not.
You may try to change my mind, but hopefully you will now have a better understanding of what it would take to do so, and why admonishments to go read the original literature are not going to further engage my interest.
I have already read Kant, and I wasn’t impressed; some intelligent people take the CI seriously, most, including most philosophers, do not.
Could that have been based on misunderstanding on your part?
he failed to prove that objective morality enters into the system on any level, but more or less took it for granted.
Was he supposed to prove that? Some think he is a constructivist.
I don’t think you or Kant have sufficiently specified the concept of “universal,” such that one can rigorously state what does or does not qualify,
I don’t think he did either, and I don’t think that’s a good reason to give such trivial counterexamples. All the stuff you like started out non-rigorous as well.
I think that the Categorical Imperative fails to describe our intuitive sense of morality
And physics fails to describe folk-physics.
The problem is that you are rejecting one theory for being non-rigorous whilst tacitly accepting others that are also non-rigorous. Your intuitions being an extreme example.
Could that have been based on misunderstanding on your part?
Yes, but I don’t think I have more reason to believe so now than I did when this conversation began; I would need input of a rather different sort to start taking it more seriously.
Was he supposed to prove that? Some think he is a constructivist.
He made it rather clear that he intended to, although if you wish to offer your own explanation as to why I should believe otherwise, you are free to do so; referring me back to the original text is naturally not going to help here.
If you’re planning to refer me to some other philosopher offering a critique on him, I’d appreciate an explanation of why I should take this philosopher’s position seriously; as I’ve already said, I was unimpressed with Kant, and for that matter, with most philosophers whose work I’ve read (in college, I started out with a double major in philosophy, but eventually dropped it because it required me to spend so much time on philosophers whose work I felt didn’t deserve it, so I’m very much not predisposed to spring into more philosophers’ work without good information to narrow down someone I’m likely to find worth taking seriously.)
I don’t think he did either, and I don’t think that’s a good reason to give such trivial counterexamples. All the stuff you like started out non-rigorous as well.
What stuff do you think I like? The reason I was giving “trivial counterexamples” was to try and encourage you to offer a formulation that would make it clear what should or should not qualify as a counterexample. I don’t think the problem with the Categorical Imperative is that there are clear examples where it’s wrong, so much as I think that it’s not formulated clearly enough that one could even say whether something qualifies as a counterexample or not.
And physics fails to describe folk-physics.
The problem is that you are rejecting one theory for being non-rigorous whilst tacitly accepting others that are also non-rigorous. Your intuitions being an extreme example.
I don’t accept my moral intuitions as an acceptable moral framework. What do you think it is that I tacitly accept which is not rigorous?
If the distinction between physics and folk physics is that the former is an objective description of reality while the latter is a rough intuitive approximation of it, what reason do we have to suspect that the distinction between the Categorical Imperative and intuitive morality is in any way analogous to this?
The reason I was giving “trivial counterexamples” was to try and encourage you to offer a formulation that would make it clear what should or should not qualify as a counterexample.
Makes it clear to whom? The points you are missing are so basic, it can only be that you don’t want to understand.
I don’t think the problem with the Categorical Imperative is that there are clear examples where it’s wrong, so much as I think that it’s not formulated clearly enough that one could even say whether something qualifies as a counterexample or not.
Would you accept a law—an actual legal law—that exempts a named individual for no particular reason, as being a fair and just law? Come on, this is just common-sense reasoning.
Would you accept a law—an actual legal law—that exempts a named individual for no particular reason, as being a fair and just law? Come on, this is just common-sense reasoning.
If it’s “just common sense reasoning,” then your common sense is doing all the work, which is awfully unhelpful when you run into an agent whose common sense says differently.
Let’s say I think it would be a good law. Can you explain to me why I should think otherwise, while tabooing “fair” and “common sense?”
People have been falling back on “common sense” for thousands of years, and it made for lousy science and lousy philosophy. It’s when we can deconstruct our intuitions that we start to make progress.
ETA: Since you’ve not been inclined to actually follow along and offer arguments for your positions so far, I’ll make it clear that this is not a position I’m putting forward out of sheer contrarianism, I have an actual moral philosophy in mind which has been propounded by real people, under which I think that such a law could be a positive good.
Let’s say I think it would be a good law. Can you explain to me why I should think otherwise, while tabooing “fair” and “common sense?”
I’ll take a crack at this.
Laws are essentially code that gets executed by an enforcement and judicial system. Each particular law/statute is a module or subroutine within that code; its implementation will have consequences for the implementation of other modules / subroutines within that system.
So, let’s say we insert a specific exception into our legal system for a particular person. Which person? Why that person, rather than another? Why only one person?
Projecting myself into the mindset of someone who wants a specific exception for themselves, let’s go with the simplest answers first:
“Me. Because I’m that person. Because I don’t want competition.”
Now, remember that laws are just code; they still have to be executed by the people who make up the enforcement and judicial systems of the society they’re passed for. What’s in it for those people, to enforce your law?
If you can provide an incentive for people to make a privileged exception for you, then you de facto have your own law, even if it isn’t on the books. If you CAN’T provide such an incentive, then you de facto don’t have your own law, even if you DO get it written into the law books.
Now, without any “particular reason”, why would people adopt and execute such a law?
If there IS such a reason—say, the privileged entity has a private army, or mind-control lasers, or wild popular support—then the actual law isn’t “Such-and-such entity is privileged”, even if that’s what’s written in the law books. The ACTUAL law is “Any entity with a private army larger than the state can comfortably disarm is privileged”, or “any entity with mind-control lasers is privileged”, or “any entity with too much popular support is privileged”, all of which are circumstances that might change. And the moment they do, the reality will change, regardless of what laws might be on the books.
It’s really the same with personal ethics. When you say, “I should steal and people shouldn’t punish me for it, even though most people should be punished for stealing”, you’re actually (at least partially) encoding “I think I can get away with stealing”. Most primate psychology has rather specific conditions for when that belief is true or not.
If I want to increase the chance that “I can get away with stealing” is true, setting a categorical law of “If Brent Dill, then cheat, otherwise don’t cheat” won’t actually help me Win nearly as much as wild popular support, or a personal army, or mind control lasers would.
And no, I am not bypassing the original question of “should I have such a law?”—I’m distilling it down, while tabooing “fair” and “common sense”, to the only thing that’s left—“can I get away with having such a law?”
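A sketch of that laws-as-code point (all names and conditions hypothetical): the statute on the books and the predicate enforcers actually execute can come apart, and outcomes follow the executed predicate, not the written one.

    # Sketch: the written statute vs. the law that actually gets executed.
    def written_statute(person: str) -> str:
        # What the books say: everyone who steals is punished, except Jiro.
        return "exempt" if person == "Jiro" else "punish"

    def enforced_law(person: str, army_size: int, state_capacity: int = 100) -> str:
        # What enforcers execute: whoever the state cannot comfortably
        # disarm is privileged, whatever the books say.
        return "exempt" if army_size > state_capacity else "punish"

    print(written_statute("Jiro"))                   # exempt on paper...
    print(enforced_law("Jiro", army_size=0))         # ...punished in fact
    print(enforced_law("warlord", army_size=10**6))  # exempt in fact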
The ACTUAL law is “Any entity with a private army larger than the state can comfortably disarm is privileged”,
Which explains, albeit in a weird and disturbing way, the principle at work. There is a difference between having universal (fair, impartial) laws for multiple reference classes, and laws that apply to a reference class, but make exceptions. There is a difference between “minors should have different laws” and “the law shouldn’t apply to me”. The difference is that reference classes are defined by shared properties—which can rationally justify the use of different rules—but individuals aren’t. What is it about me that means I can be allowed to steal?
This is a familiar idea. For instance, in physics, we expect different laws to apply to, e.g., charged and uncharged particles. But we don’t expect electron #34568239 to follow some special laws of its own.
The difference is that reference classes are defined by shared properties—which can rationally justify the use of different rules—but individuals aren’t.
I’m pretty sure I can define a set of properties which specifies a particular individual.
What is it about [me] that means I can be allowed to steal?
That you’re in a class and the class is a class for which the rule spits out “is allowed to steal”.
It may not be a rule that you expect the CI to apply to, but it’s certainly a rule.
What you’re doing is adding extra qualifications which define good rules and bad rules. The “shared property” one doesn’t work well, but I’m sure that eventually you could come up with something which adequately describes what rules we should accept and what rules we shouldn’t.
The trouble with doing this is that your qualifications would be doing all the work of the Categorical Imperative—you’re not using the CI to distinguish between good and bad rules, you have a separate list that essentially does the same thing independently and the CI is just tacked on. The CI is about as useful as a store sign which says “Prices up to 50% off or more!”
I’m pretty sure I can define a set of properties which specifies a particular individual.
I think you will find that defining a set of properties that picks out only one individual, and always defines the same individual under any circumstances is extremely difficult.
What is it about [me] that means I can be allowed to steal?
That you’re in a class and the class is a class for which the rule spits out “is allowed to steal”.
And if I stop being in that class, or other people join it, there is nothing (relevantly) special about me. But that is not what you are supposed to be defending. You are supposed to be defending the claim that:
“[the named individual] is allowed to steal”

is equivalent to

“[those with the named individual’s properties] are allowed to steal”.
I say they are not because there is no rigid relationship between names and properties (and, therefore, class membership).
The trouble with doing this is that your qualifications would be doing all the work of the Categorical Imperative—you’re not using the CI to distinguish between good and bad rules,
No, I can still say that rules that do not apply impartially to all members of a class are bad.
I say they are not because there is no rigid relationship between names and properties (and, therefore, class membership).
Being “the person named by ___” is itself a property.
I can still say that rules that do not apply impartially to all members of a class are bad.
Then you’re shoving all the nuance into your definitions of “impartially” or “class” (depending on what grounds you exclude the examples you want to exclude) and the CI itself still does nothing meaningful. Otherwise I could say that “people who are Jiro” is a class or that applying an algorithm that spits out a different result for different people is impartial.
Being “the person named by ___” is itself a property.
What instrument do you use to detect it? Do an entity’s properties change when you rename it?
Then you’re shoving all the nuance into your definitions of “impartially” or “class” (depending on what grounds you exclude the examples you want to exclude) and the CI itself still does nothing meaningful.
If I expand out the CI in terms of “impartiality” and “class” it is doing something meaningful.
A property does not mean something that is (nontrivially) detectable by an instrument.
If I expand out the CI in terms of “impartiality” and “class” it is doing something meaningful.
No it’s not. It’s like saying you shouldn’t do bad things and claiming that that’s a useful moral principle. It isn’t one unless you define “bad things”, and then all the meaningful content is really in that, not in the original principle. Likewise for the CI. All its useful meaning is in the clarifications, not in the principle.
A property does not mean something that is (nontrivially) detectable by an instrument.
That’s a matter of opinion. IMO, the usual alternative, treating any predicate as a property, is a source of map-territory confusions.
No it’s not. It’s like saying you shouldn’t do bad things and claiming that that’s a useful moral principle. It isn’t one unless you define “bad things”, and then all the meaningful content is really in that, not in the original principle. Likewise for the CI.
Clearly that could apply to any other abstract term … so much for reductionism, physicalism, etc.
I can’t see how my appeals to common sense are worse than your appeals to intuition. And it is not a case of my defending the CI, but of my explaining to you how to understand it. You can understand it by assuming it is saying something commonsensical. You keep trying to read it as though it is a rigorous specification of something arbitrary and unguessable, like an acontextual line of program code. It’s not rigorous, and that doesn’t matter, because it’s non-arbitrary and it is understandable in terms of non-rigorous notions you already have.
There’s some chance that Derstopa is mistaken about absolutely anything. What evidence do you have that would persuade Derstopa that he is misunderstanding the categorical imperative?
If we have different rules for minors and the insane, why can’t we have different rules for Jiro? “Jiro” is certainly as good a reference class as “minors”.
It’s not like the issue has never been noticed or addressed:
“Hypothetical imperatives apply to someone who wishes to attain certain ends. For example:
if I wish to quench my thirst, I must drink something;
if I wish to acquire knowledge, I must learn.
A categorical imperative, on the other hand, denotes an absolute, unconditional requirement that asserts its authority in all circumstances, both required and justified as an end in itself. It is best known in its first formulation:
Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” —WP
If that’s what makes the world least convenient, sure. You’re trying for a reductio ad absurdum, but the LCPW is allowed to be pretty absurd. It exists only to push philosophies to their extremes and to prevent evasions.
I think you replied before my ETA. The LCPW is, in fact, not allowed to be pretty absurd. When pushed on one’s interlocutors, it does not prevent evasions, it is an evasion.
You’re kind of missing the point here. I probably should have clarified my position more.
The reason I want people to trust the justice system is so that people will not be inclined to commit crimes, because it would then be more likely (from their point of view) that, if they did, they would get caught. I suppose there is the issue of precedent to worry about, but the ultimate purpose of the justice system, from the consequentialist viewpoint, is to deter crimes (by either the offender it is dealing with or potential others), not to punish criminals. As the offender is, by assumption, unlikely to reoffend, everyone else’s criminal behaviour is the main factor here, and this is minimised through the justice system’s reputation. (I also should have added the assumption that attempts to convince people of the truth have failed.) By prosecuting X you are achieving this purpose. The Least Convenient Possible World is the one where there is no third way, or additional factor I hadn’t thought of, that lets you get out of this.
Rationality is not about maximising the accuracy of your beliefs, nor the accuracy of others. It is about winning!
EDIT: Grammar.

EDIT: The point is, if you would punish a guilty person for a stabler society, you ought to do the same to an innocent person, for the same benefit.
The point is, if you would punish a guilty person for a stabler society, you ought to do the same to an innocent person, for the same benefit.
This ignores the causal relationships. How is punishing the innocent supposed to create a stabler society? Because, in your scenario, it’s just this once and no-one will ever know. But it’s never just this once, and people (the judge, X, and Y at least) will know. As one might observe from a glance at the news from time to time. All you’re doing is saying, “But what if it really was just this once and no-one would ever know?” To which the answer is, “How will you know?” To which the LCPW replies “But what if you did know?”, engulfing the objection and Borgifying it into an extra hypothesis of your own.
You might as well jump straight to your desired conclusion and say “But what if it really was Good, not Bad?” and you are no longer talking about anything in reality. Reality itself is the Least Convenient Possible World.
Possibly I used it out of context. What I mean is that utility(less crime) > utility(society has an inaccurate view of the justice system) when the latter has few other consequences, and rationality is about maximising utility. Also, in the Least Convenient World, overall this trial will not affect any others, hence negating the point about the accuracy of the justice system. Here knowledge is not an end, it is a means to an end.
Test for Consequentialism:
Suppose you are a judge in deciding whether person X or Y commited a murder. Let’s also assume your society has the death penalty. A supermajority of society (say, encouraged by the popular media) has come to think that X committed the crime, which would decrease their confidence in the justice system if he is set free, but you know (e.g. because you know Bayes) that Y was responsible. We also assume you know that Y won’t reoffend if set free because (say) they have been too spooked by this episode. Will you condemn X or Y? (Before you quibble your way out of this, read The Least Convenient Possible World)
If you said X, you pass.
Just a response to “Saddam Hussein doesn’t deserve so much as a stubbed toe.”
N.B. This does not mean I’m against consequentialism.
By condemning X, I uphold the people’s trust in the justice system, while making it unworthy of that trust. By condemning Y, I reduce the people’s trust in the justice system, while making the system worthy of their trust. But what is their trust worth, without the reality that they trust in?
If I intend the justice system to be worthy of confidence, I desire to act to make it worthy of confidence. If I intend it to be unworthy of confidence, I desire to act to make it unworthy of confidence. Let me not become unattached to my desires, nor attached to what I do not desire.
Also, there is no Least Convenient Possible World. The Least Convenient Possible World for your interlocutors is the Most Convenient Possible World for yourself, the one where you get to just say “Suppose that such and such, which you think is Bad, were actually Good. Then it would be Good, wouldn’t it?”
In the least convenient possible world, condemning an innocent in this one case will not make the system generally less worthy of confidence. Maybe you know it will never happen again.
Maybe everyone would have a pony.
ETA: It is not for the proponent of an argument to fabricate a Least Convenient Possible World—that is, a Most Convenient Possible World for themselves—and insist that their interlocutors address it, brushing aside every argument they make by inventing more and more Conveniences. The more you add to the scenario, the smaller the sliver of potential reality you are talking about. The endpoint of this is the world in which the desired conclusion has been made true by definition, at which point the claim no longer refers to anything at all.
The discipline of the Least Convenient Possible World is a discipline for oneself, not a weapon to point at others.
If I, this hypothetical judge, am willing to have the innocent punished and the guilty set free, to preserve confidence that the guilty are punished and the innocent are set free, I must be willing that I and my fellow judges do the same in every such case. Call this the Categorical Imperative, call it TDT, that is where it leads, at the speed of thought, not the speed of time: to take one step is to have travelled the whole way. I would have decided to blow with the mob and call it justice. It cannot be done.
The categorical imperative ignores the possibility of mixed strategies—it may be that doing X all the time is bad, doing Y all the time is bad, but doing a mixture of X and Y is not. For instance, if everyone only had sex with someone of the same sex, that would destroy society by lack of children. (And if everyone only had sex with someone of the opposite sex, gays would be unsatisfied, of course.) The appropriate thing to do, is to allow everyone to have sex with the type of partner that fits their preferences. Or to put it another way, “doing the same thing” and “in the same kind of case” depend on exactly what you count as the same—is the “same” thing “having only gay sex” or “having either type of sex depending on one’s preference”?
In the punishment case, it may be that we’re better off with a mixed strategy of sometimes killing innocent people and sometimes not; if you always kill innocent people, the justice system is worthless, but if you never kill innocent people, people have no confidence in the justice system and it also ends up being worthless. The optimal thing to do may be to kill innocent people a certain percentage of the time, or only in high profile public cases, or whatever. Asking “would you be willing to kill innocent people all the time” would be as inappropriate as asking “would you be willing to be in a society where people (when having sex) have gay sex all the time”. You might be willing to do the “same thing” all the time where the “same thing” means “follow the public’s preference, which sometimes leads to killing the innocent” (not “always kill the innocent ”) just like in the gay sex example it means “follow someone’s sexual preference, which sometimes leads to gay sex” (not “always have gay sex”).
Yes, the categorical imperative has the problem of deciding on the reference class, as do TDT, the outside view, and every attempt to decide what precedent will be set by some action, or what precedent the past has set for some decision. Eliezer coined the phrase “reference class tennis” to refer to the broken sort of argumentation that consists of choosing competing reference classes in order to reach desired conclusions.
So how do you decide on the right reference class, rather than the one that lets you conclude what you already wanted to for other reasons? TDT, being more formalised (or intended to be, if MIRI and others ever work out exactly what it is) suggests a computational answer to this question. The class that your decision sets a precedent for is the class that shares the attributes that you actually used in making your decision—the class that you would, in fact, make the same decision for.
This is not a solution to the reference class problem, or even an outline of a solution; it is only a pointer in a direction where a solution might be found. And even if TDT is formalised and gives a mathematical solution to the reference class problem, we may be in the same situation as we are with Bayesian reasoning: we can, and statisticians do, actually apply Bayes theorem in cases where the actual numbers are available to us, but “deep” Bayesianism can only be practiced by heuristic approximation.
“Would you like it if everyone did X” is just a bad idea, because there are some things whose prevalences I would prefer to be neither 0% nor 100%, but somewhere inbetween. That’s really an objection to the categorical imperative, period. I can always say that I’m not really objecting to the categorical imperative in such a situation by rephrasing it in terms of a reference class “would you like it if everyone performed some algorithm that produced X some of the time”, but that gets far away from what most people mean when they use the categorical imperative, even if technically it still fits.
An average person not from this site would not even comprehend “would you like it if everyone performed some algorithm with varying results” as a case of the golden rule, categorical imperative, or whatever, and certainly wouldn’t think of it as an example of everyone doing the “same thing”. In most people’s minds, doing the same thing means to perform a simple action, not an algorithm.
In that case, the appropriate X is to perform the action with whatever probability you would wish to be the case. It still fits the CI.
Or more briefly, it still fits. But you have to actually make the die roll. What “an average person not from this site” would or would not comprehend by a thing is not relevant to discussions of the thing itself.
In that case, you can fit anything whatsoever into the categorical imperative by defining an appropriate reference class and action. For instance, I could justify robbery with “How would I like it, if everyone were to execute ‘if (person is Jiro) then rob else do nothing’”. The categorical imperative ceases to have meaning unless some actions and some reference classes are unacceptable.
That’s too brief. Because :”what do most people mean when they say this” actually matters. They clearly don’t mean for it to include “if (person is Jiro) then rob else do nothing” as a single action that can be universalized by the rule.
The reason that doesn’t work is that people who are not Jiro would not like it if everyone were to execute ‘if (person is Jiro) then rob else do nothing’, so they couldn’t justify you robbing that way. The fact that the rule contains a gerrymandered reference class isn’t by itself a problem.
Does the categorical imperative require everyone to agree on what they would like or dislike? That seems brittle.
I’ve always heard it, the Golden Rule, and other variations to be some form of “would you like it if everyone were to do that?” I’ve never heard of it as “would everyone like it if everyone were to do that?”. I don’t know where army1987 is getting the second version from.
This post discusses the possibility of people “not in moral communion” with us, with the example of a future society of wireheads.
Doing which is reference class tennis, as I said. The solution is to not do that, to not write the bottom line of your argument and then invent whatever dishonest string of reasoning will end there.
No kidding. And indeed some are not, as you clearly understand, from your ability to make up an example of one. So what’s the problem?
What principle determines what actions are unacceptable apart from “they lead to a bottom line I don’t like”? That’s the problem. Without any prescription for that, the CI fails to constrain your actions, and you’re reduced to simply doing whatever you want anyway.
This asserts a meta-meta-ethical proposition that you must have explicit principles to prescribe all your actions, without which you are lost in a moral void. Yet observably there are good and decent people in the world who do not reflect on such things much, or at all.
If to begin to think about ethics immediately casts you into a moral void where for lack of yet worked out principles you can no longer discern good from evil, you’re doing it wrong.
Look, I have no problem with basing ethics on moral intuitions, and what we actually want. References to right and wrong are after all stored only in our heads.
But in the specific context of a discussion of the Categorical Imperative—which is supposed to be a principle forbidding “categorically” certain decisions—there needs to be some rule explaining what “universalizable” actions are not permitted, for the CI to make meaningful prescriptions. If you simply decide what actions are permitted based on whether you (intuitively) approve of the outcome, then the Imperative is doing no real work whatsoever.
If, like most people, you don’t want to be murdered, the CI will tell you not to murder. If you don’t want to be robbed, it will tell you not to rob. Etc. It does work for the normal majority, and the abnornmal minority are probably going to be a problem under any system.
Please read the above thread and understand the problem before replying.
But for your benefit, I’ll repeat it: explain to me, in step-by-step reasoning, how the categorical imperative forbids me from taking the action “if (I am nshepperd) then rob else do nothing”. It certainly seems like it would be very favourable to me if everyone did “if (I am nshepperd) then rob else do nothing”.
That’s a blatant cheat. How can you have a universal law that includes a specific exception for a named individual?
The way nshepperd just described. It is, after all, a universal law, applied in every situation. It just returns different results for a specific individual. We can call a situation-sensitive law like this a piecewise law.
Most people would probably not want to live in a society with a universal law not to steal unless you are a particular person, if they didn’t know in advance whether or not the person would be them, so it’s a law one is unlikely to support from behind a veil of ignorance.
However, some piecewise laws do better behind veils of ignorance than non-piecewise universal laws. For instance, laws which distinguish our treatment of introverts from extroverts stand to outperform ones which treat both according to the same standard.
You can rescue non piecewise categorical imperatives by raising them to a higher level of abstraction, but in order to keep them from being outperformed by piecewise imperatives, you need levels of abstraction higher than, for example “Don’t steal.” At a sufficient level of abstraction, categorical imperatives stop being actionable guides, and become something more like descriptions of our fundamental values.
I’m all in favour of going to higher levels of abstraction. It’s a much better approach than coding in kittens-are-nice and slugs-are-nasty.
Is there anything that makes it qualitatively different from
if (subject == A) { return X }
else if (subject == B) { return Y }
else if (subject == C) { return Z } … etc. etc.?
No, there isn’t any real difference from that, which is why the example demonstrates a flaw in the Categorical Imperative. Any non-universal law can be expressed as a universal law. “The law is ‘you can rob’, but the law should only be applied to Jiro” is a non-universal law, but “The law is ‘if (I am Jiro) then rob else do nothing’ and this law is applied to everyone” is a universal law that has the same effect. Because of this ability to express one in terms of the other, saying “you should only do things if you would like for them to be universally applied” fails to provide any constraints at all, and is useless.
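The rewriting trick here is purely mechanical. Below is a minimal sketch of it in Python; the name “Jiro” and the action labels are just placeholders taken from the discussion. The first function is the “non-universal” law, stated as a rule for one named person and silent about everyone else; the second is formally universal, returning a verdict for every agent, yet it licenses exactly the same behaviour.

# Sketch: a non-universal law re-expressed as a universal one.
def non_universal_law(agent):
    # Only addresses Jiro; says nothing about anyone else.
    if agent == "Jiro":
        return "rob"
    return None

def universal_law(agent):
    # Formally total: it returns a verdict for every agent...
    if agent == "Jiro":
        return "rob"
    return "do nothing"

# ...yet it permits exactly the same robberies as the first law.
for agent in ["Jiro", "Alice", "Bob"]:
    print(agent, universal_law(agent))

Since any restricted rule can be made total by adding an “else do nothing” branch, totality alone cannot be what “universalizable” is supposed to mean, which is the point of the counterexample.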
Of course, most people don’t consider such universal laws to be universal laws, but on the other hand I’m not convinced that they are consistent when they say so—for instance “if (I am convicted of robbery) then put me in jail else nothing” is a law that is of similar form but which most people would consider a legitimate universalizable law.
If the law gives different results for different people doing the same thing, it isn’t universal in the intended sense, which is pretty much the same as fairness.
“In the intended sense” is not a useful description compared to actually writing down a description. It also may not necessarily even be consistent.
Furthermore, it’s clear that most people consider “if (I am convicted of robbery) then put me in jail else nothing” to be a universal law in the intended sense, yet that gives different results for different people (one result for robbers, another result for non-robbers) doing the same thing (nothing, in either case).
It is possible to arrive at the intended sense by assuming that the people you are commenting on are not idiots who can be disproven with one-line comments.
Another facile point.
It’s also possible to completely fail to explain things to intelligent people by assuming that their intelligence ought to be a sufficient asset to make your explanations comprehensible to them. If people are consistently telling you that your explanations are unclear or don’t make sense, you should take very, very seriously the likelihood that, at the least in your efforts to explain yourself, you are doing something wrong.
Which bit of “pretty much the same as fairness” were you having trouble with?
Do you think “all robbers should be jailed except TheAncientGeek” is a fair rule?
What rule would count as non-universal for you?
The “fairness” part. Falling back on another insufficiently specified intuitive concept doesn’t help explain this one. Is it fair to jail a man who steals a loaf of bread from a rich man so his nephew won’t starve? A simple yes or no isn’t enough here, we don’t all have identical intuitive senses of fairness, so what we need isn’t the output for any particular question, but the process that generates the outputs.
I don’t think “all robbers should be jailed except TheAncientGeek” is a fair rule, but that doesn’t advance the discussion from where we were already.
A universal rule would be one that anyone could check at any time for relevant output (both “never steal” and “if nshepperd, steal, else do nothing” would be examples), as opposed to one which only produces output for a specific individual or in a specific instance (for example “nshepperd can steal,” or “on January 3rd, 2014, it is okay to steal”). The latter would be specific-case rules.
It is not an intuition about what is true; it is a concept that helps to explain another concept. If you let it.
Then why do you think you can build explicit exceptions into rules and still deem them universal? I think you can’t because I think, roughly speaking, universal=fair.
Such a rule is useless for moral guidance. But intelligent people think the CI is useful for moral guidance. That should have told you that your guess about what “universal” means, in this context, is wrong. You should have discarded that interpretation and sought one that does not make the CI obviously foolish.
“Intelligent people” also think you shouldn’t switch in the common version of the Monty Hall problem. The whole point of this argument is to point out that the CI doesn’t make sense as given and therefore, that “intelligent people” are wrong about it.
No, it tells me that people who think the CI is useful have not thought through the implications. It’s easy to say that rules like the ones given above can’t be made “universal”, but the same people who wouldn’t think such rules can be made universal are willing to make other rules of similar form universal (why is a rule that says that only Jiro can rob not “universal”, but one which says that only non-minors can drink alcohol is?)
None of the comments have come anywhere near the CI as given. Kant did not define the CI as an accessible function.
I have already answered your second point.
I don’t think there is, but then, I don’t think that classifying things as universal law or not is usually very useful in terms of moral guidelines anyway. I consider the Categorical Imperative to be a failed model.
Why is it failed? A counterexample was put forward that isn’t a universal law. That doesn’t prove the CI to be wrong. So what does?
We already adjust rules by reference classes, since we have different rules for minors and the insane. Maybe we just need rules that are apt to the reference class and impartial within it.
When you raise it to high enough levels of abstraction that the Categorical Imperative stops giving worse advice than other models behind a veil of ignorance, it effectively stops giving advice at all due to being too abstract to apply to any particular situation with human intelligence.
You can fragment the Categorical Imperative into vast numbers of different reference classes, but when you do it enough to make it ideally favorable from behind a veil of ignorance, you’ve essentially defeated any purpose of treating actions as if they were generalizable to universal law.
I’d love to know the meta-model you are using to judge between models.
Universal isn’t really universal, since you can’t prove mathematical theorems to stones.
Fairness within a reference class counts.
I think I’ve already made that implicit in my earlier comments; I’m judging based on the ability of a society run on such a model to appeal to people from behind a veil of ignorance.
I think that is a false dichotomy. One rule for everybody may well fail; everybody having their own rule may well fail. However, there is still the tertium datur of N>1 rules for M>1 people. Which is kind of how legal systems work in the real world.
Legal systems that were in place before any sort of Categorical Imperative formulation, and did not particularly change in response to it.
I think our own legal systems could be substantially improved upon, but that’s a discussion of its own. Do you think that the Categorical Imperative formulation has helped us, morally speaking, and if so how?
The planets managed to stay in their orbits before Newton, as well.
So far I have only been pointing out that the arguments against it barely scratch the surface.
So do you think that it either improves or accurately describes our morality, and if so, can you provide any argument for this?
I think it is a feasible approach which is part of a family of arguments which have never been properly considered on LW.
That doesn’t answer my question.
I would suggest that the Categorical Imperative has been considered at some length by many, if not all members of Less Wrong, but doesn’t have much currency because in general nobody here is particularly impressed with it. That is, they don’t think that it either improves upon or accurately describes our native morality.
If you think that people on Less Wrong ought to take it seriously, demonstrating that it does one of those would be the way to go.
I was deliberately not playing along with your framing that the CI is wrong by default unless elaborately defended.
I see no evidence of that. If it had been considered at length, people would be able to understand it (you keep complaining that you do not), and they would be able to write relevant critiques that address what it is actually about.
Again, I don’t have to put forward a steelmanned version of a theory to demonstrate that it should not be lightly dismissed. That is a false dichotomy.
I’m not complaining that I don’t understand it, I’m complaining that your explanations do not make sense to me. Your formulation seems to differ substantially from Kant’s (for instance, the blanket impermissibility of stealing was a case he was sufficiently confident in to use as an example, whereas you do not seem attached to that principle.)
You haven’t explained anything solid enough to make a substantial case that it should not be lightly dismissed; continuing to engage at all is more a bad habit of mine than a sign that you’re presenting something of sufficient use to merit feedback. If you’re not going to bother explaining anything with sufficient clarity to demonstrate, crucially, both that you have a genuinely coherent idea of what you yourself are talking about and that it is something we should take seriously, I am going to resolve not to engage any further, as I should have done well before now.
If you understand, why do you need me to explain?
I have no idea what you are referring to.
Again: that is not the default.
Because I think you don’t have a coherent idea of what you’re talking about, and if you tried to formulate it rigorously you’d either have to develop one, or realize that you don’t know how to express what you’re proposing as a workable system. Explaining things to others is how we solidify or confirm our own understanding, and if you resist taking that step, you should not be assured of your own understanding.
Now you know why I was bothering to participate in the first place, and it is time, unless you’re prepared to actually take that step, for me to stop.
Why should I repeat what is in the literature on the CI, instead of you reading it? It is clear from your other comments that you don’t in fact understand it. It is not as if you had read some encyclopedia article and said “I don’t get this bit”—a perfectly ordinary kind and level of misunderstanding. Instead, you have tried to shoe-horn it into some weird computer-programming metaphor which is entirely inappropriate. It is that layer of “let’s translate this into some entirely different discipline” that is causing the problem for you and others.
Okay, I’m being really bad here, and I encourage anyone who’s following along to downvote me for my failure to disengage, but I might as well explain myself here to a point where you actually know what you’re arguing with.
I have already read Kant, and I wasn’t impressed; some intelligent people take the CI seriously, but most, including most philosophers, do not. I think Kant was trying too hard to find ways he could get his formulation to seem like it worked, and not looking hard enough for ways he could get it to break down; he failed to grasp that he had specified his core concepts too loosely to create a useful system (and also failed to prove that objective morality enters into the system on any level, but more or less took it for granted).
I don’t particularly expect you to agree that piecewise rules like the ones I described qualify as “universal,” but I don’t think you or Kant have sufficiently specified the concept of “universal” such that one can rigorously state what does or does not qualify, and I think that trying to so specify, for an audience prepared to point out failures of rigor in the formulation, would lead you to the conclusion that it’s much, much harder to develop a moral framework which is rigorous and satisfying and coherent than you or Kant have made it out to be.
I think that the Categorical Imperative fails to describe our intuitive sense of morality (I can offer explanations as to why if you wish, but I would be much more amenable to doing so if you would actually offer explanations for your positions when asked, rather than claiming it’s not your responsibility to do so,) fails to offer improvements of desirability over our intuitive morality on a society that runs on it from behind a veil of ignorance, and that there is not sound reason to think that it is somehow, in spite of these things, a True Objective Description of Morality, and absent such reason we should assume, as with any other hypothetical framework lacking such reason, that it’s not.
You may try to change my mind, but hopefully you will now have a better understanding of what it would take to do so, and why admonishments to go read the original literature are not going to further engage my interest.
Could that have been based on misunderstanding on your part?
Was he supposed to prove that? Some think he is a constructivist.
I don’t think he did either, and I don’t think that’s a good reason to give such trivial counterexamples. All the stuff you like started out non-rigorous as well.
And physics fails to describe folk-physics.
The problem is that you are rejecting one theory for being non-rigorous whilst tacitly accepting others that are also non-rigorous. Your intuitions being an extreme example.
Yes, but I don’t think I have more reason to believe so now than I did when this conversation began; I would need input of a rather different sort to start taking it more seriously.
He made it rather clear that he intended to, although if you wish to offer your own explanation as to why I should believe otherwise, you are free to do so; referring me back to the original text is naturally not going to help here.
If you’re planning to refer me to some other philosopher offering a critique on him, I’d appreciate an explanation of why I should take this philosopher’s position seriously; as I’ve already said, I was unimpressed with Kant, and for that matter, with most philosophers whose work I’ve read (in college, I started out with a double major in philosophy, but eventually dropped it because it required me to spend so much time on philosophers whose work I felt didn’t deserve it, so I’m very much not predisposed to spring into more philosophers’ work without good information to narrow down someone I’m likely to find worth taking seriously.)
What stuff do you think I like? The reason I was giving “trivial counterexamples” was to try and encourage you to offer a formulation that would make it clear what should or should not qualify as a counterexample. I don’t think the problem with the Categorical Imperative is that there are clear examples where it’s wrong, so much as I think that it’s not formulated clearly enough that one could even say whether something qualifies as a counterexample or not.
I don’t accept my moral intuitions as an acceptable moral framework. What do you think it is that I tacitly accept which is not rigorous?
If the distinction between physics and folk physics is that the former is an objective description of reality while the latter is a rough intuitive approximation of it, what reason do we have to suspect that the distinction between the Categorical Imperative and intuitive morality is in any way analogous to this?
Everyone likes something.
Makes it clear to whom? The points you are missing are so basic, it can only be that you don’t want to understand.
Would you accept a law—an actual legal law—that exempts a named individual for no particular reason, as being a fair and just law? Come on, this is just common-sense reasoning.
If it’s “just common sense reasoning,” then your common sense is doing all the work, which is awfully unhelpful when you run into an agent whose common sense says differently.
Let’s say I think it would be a good law. Can you explain to me why I should think otherwise, while tabooing “fair” and “common sense?”
People have been falling back on “common sense” for thousands of years, and it made for lousy science and lousy philosophy. It’s when we can deconstruct our intuitions that we start to make progress.
ETA: Since you’ve not been inclined to actually follow along and offer arguments for your positions so far, I’ll make it clear that this is not a position I’m putting forward out of sheer contrarianism, I have an actual moral philosophy in mind which has been propounded by real people, under which I think that such a law could be a positive good.
I’ll take a crack at this.
Laws are essentially code that gets executed by an enforcement and judicial system. Each particular law/statute is a module or subroutine within that code; its implementation will have consequences for the implementation of other modules / subroutines within that system.
So, let’s say we insert a specific exception into our legal system for a particular person. Which person? Why that person, rather than another? Why only one person?
Projecting myself into the mindset of someone who wants a specific exception for themselves, let’s go with the simplest answers first:
“Me. Because I’m that person. Because I don’t want competition.”
Now, remember that laws are just code; they still have to be executed by the people who make up the enforcement and judicial systems of the society they’re passed for. What’s in it for those people, to enforce your law?
If you can provide an incentive for people to make a privileged exception for you, then you de facto have your own law, even if it isn’t on the books. If you CAN’T provide such an incentive, then you de facto don’t have your own law, even if you DO get it written into the law books.
Now, without any “particular reason”, why would people adopt and execute such a law?
If there IS such a reason—say, the privileged entity has a private army, or mind-control lasers, or wild popular support—then the actual law isn’t “Such-and-such entity is privileged”, even if that’s what’s written in the law books. The ACTUAL law is “Any entity with a private army larger than the state can comfortably disarm is privileged”, or “any entity with mind-control lasers is privileged”, or “any entity with too much popular support is privileged”, all of which are circumstances that might change. And the moment they do, the reality will change, regardless of what laws might be on the books.
It’s really the same with personal ethics. When you say, “I should steal and people shouldn’t punish me for it, even though most people should be punished for stealing”, you’re actually (at least partially) encoding “I think I can get away with stealing”. Most primate psychology has rather specific conditions for when that belief is true or not.
If I want to increase the chance that “I can get away with stealing” is true, setting a categorical law of “If Brent Dill, then cheat, otherwise don’t cheat” won’t actually help me Win nearly as much as wild popular support, or a personal army, or mind control lasers would.
And no, I am not bypassing the original question of “should I have such a law?”—I’m distilling it down, while tabooing “fair” and “common sense”, to the only thing that’s left—“can I get away with having such a law?”
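Since the argument above is itself framed in terms of laws-as-code, here is a minimal sketch of the written-law/actual-law distinction it describes. The incentive model is deliberately crude, and every name, power score, and threshold is made up: enforcers honour the written exemption only while the privileged party can make enforcement more costly than non-enforcement.

# Sketch: the law on the books vs. the law that actually runs.
# Power scores and the threshold are invented for illustration.
def written_law(agent, act):
    if act == "steal":
        return "exempt" if agent == "privileged_person" else "punish"
    return "ignore"

def effective_law(agent, act, power):
    # Enforcers honour the exemption only while the agent can make
    # enforcement too costly (army, lasers, popular support...).
    if act == "steal" and power.get(agent, 0) > 10:
        return "exempt"
    return "punish" if act == "steal" else "ignore"

power = {"privileged_person": 50}
print(effective_law("privileged_person", "steal", power))  # exempt
power["privileged_person"] = 0  # circumstances change...
print(effective_law("privileged_person", "steal", power))  # punish

The written law never changes in this sketch, but the effective law flips the moment the underlying incentive condition does, which is the point being made.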
Which explains, albeit in a weird and disturbing way, the principle at work. There is a difference between having universal (fair, impartial) laws for multiple reference classes, and laws that apply to a reference class, but make exceptions. There is a difference between “minors should have different laws” and “the law shouldn’t apply to me”. The difference is that reference classes are defined by shared properties—which can rationally justify the use of different rules—but individuals aren’t. What is it about me that means I can be allowed to steal?
This is a familiar idea. For instance, in physics, we expect different laws to apply to, eg, charged and uncharged particles. But we don’t expect electron #34568239 to follow some special laws of its own.
I’m pretty sure I can define a set of properties which specifies a particular individual.
That you’re in a class and the class is a class for which the rule spits out “is allowed to steal”.
It may not be a rule that you expect the CI to apply to, but it’s certainly a rule.
What you’re doing is adding extra qualifications which define good rules and bad rules. The “shared property” one doesn’t work well, but I’m sure that eventually you could come up with something which adequately describes what rules we should accept and what rules we shouldn’t.
The trouble with doing this is that your qualifications would be doing all the work of the Categorical Imperative—you’re not using the CI to distinguish between good and bad rules, you have a separate list that essentially does the same thing independently and the CI is just tacked on. The CI is about as useful as a store sign which says “Prices up to 50% off or more!”
I think you will find that defining a set of properties that picks out only one individual, and always defines the same individual under any circumstances is extremely difficult.
And if I stop being in that class, or other people join it, there is nothing (relevantly) special about me. But that is not what you are supposed to be defending. You are supposed to be defending the claim that:
“<a named individual> is allowed to steal”
is equivalent to
“<people with a given set of properties> are allowed to steal”.
I say they are not because there is no rigid relationship between names and properties (and, therefore, class membership).
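One way to see the rigidity point is a toy sketch (the property bundle chosen here is obviously artificial): a class defined by properties can gain or lose members as the world changes, while a name keeps pointing at the same individual.

# Sketch: property-defined classes are not rigid designators.
people = [
    {"name": "TheAncientGeek", "height_cm": 180, "address": "12 Elm St"},
    {"name": "Jiro", "height_cm": 175, "address": "9 Oak Ave"},
]

def in_class(person):
    # A property bundle that happens to pick out one person today.
    return person["height_cm"] == 180 and person["address"] == "12 Elm St"

print([p["name"] for p in people if in_class(p)])  # ['TheAncientGeek']

# The world changes; the property bundle now picks out two people,
# though the names still refer to the same individuals.
people[1]["height_cm"] = 180
people[1]["address"] = "12 Elm St"
print([p["name"] for p in people if in_class(p)])  # both match now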
No, I can still say that rules that do not apply impartially to all members of a class are bad.
Being “the person named by ___” is itself a property.
Then you’re shoving all the nuance into your definitions of “impartially” or “class” (depending on what grounds you exclude the examples you want to exclude) and the CI itself still does nothing meaningful. Otherwise I could say that “people who are Jiro” is a class or that applying an algorithm that spits out a different result for different people is impartial.
What instrument do you use to detect it? Do an entity’s properties change when you rename it?
If I expand out the CI in terms of “impartiality” and “class” it is doing something meaningful.
A property does not mean something that is (nontrivially) detectable by an instrument.
No it’s not. It’s like saying you shouldn’t do bad things and claiming that that’s a useful moral principle. It isn’t one unless you define “bad things”, and then all the meaningful content is really in that, not in the original principle. Likewise for the CI. All its useful meaning is in the clarifications, not in the principle.
That’s a matter of opinion. IMO, the usual alternative, treating any predicate as a property, is a source of map-territory confusions.
Clearly that could apply to any other abstract term … so much for reductionism, physicalism, etc.
I can’t see how my appeals to common sense are worse than your appeals to intuition. And it is not a case of my defending the CI, but of my explaining to you how to understand it. You can understand it by assuming it is saying something commonsensical. You keep trying to read it as though it were a rigorous specification of something arbitrary and unguessable, like an acontextual line of program code. It’s not rigorous, and that doesn’t matter, because it’s non-arbitrary and it is understandable in terms of non-rigorous notions you already have.
There’s some chance that Derstopa is mistaken about absolutely anything. What evidence do you have to persuade Derstopa that he is misunderstanding the categorical imperative?
If we have different rules for minors and the insane, why can’t we have different rules for Jiro? “Jiro” is certainly as good a reference class as “minors”.
Remember the “apt”. You would need to explain why you need those particular rules.
Explain to whom? And do I just have to explain it, or do they have to agree?
In rationality land, one rational agent is as good as another
A qualitative difference is a quantitative difference that is large enough.
Sometimes. Not always.
It’s not like the issue has never been noticed or addressed:
“Hypothetical imperatives apply to someone dependent on them having certain ends:
if I wish to quench my thirst, I must drink something; if I wish to acquire knowledge, I must learn.
A categorical imperative, on the other hand, denotes an absolute, unconditional requirement that asserts its authority in all circumstances, both required and justified as an end in itself. It is best known in its first formulation:
Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” —WP
If that’s what makes the world least convenient, sure. You’re trying for a reductio ad absurdum, but the LCPW is allowed to be pretty absurd. It exists only to push philosophies to their extremes and to prevent evasions.
Your tone is getting unpleasant.
EDIT: yes, this was before the ETA.
I think you replied before my ETA. The LCPW is, in fact, not allowed to be pretty absurd. When pushed on one’s interlocutors, it does not prevent evasions, it is an evasion.
You’re kind of missing the point here. I probably should have clarified my position more. The reason I want people to trust the justice system is so that people will not be inclined to commit crimes, because it would then seem more likely (from their point of view) that, if they did, they would get caught. I suppose there is the issue of precedent to worry about, but the ultimate purpose of the justice system, from the consequentialist viewpoint, is to deter crimes (by either the offender it is dealing with or potential others), not to punish criminals. As the offender is, by assumption, unlikely to reoffend, everyone else’s criminal behaviors are the main factor here, and these are minimised through the justice system’s reputation. (I also should have added the assumption that attempts to convince people of the truth have failed.) By prosecuting X you are achieving this purpose. The Least Convenient Possible World is the one where there’s no third way or additional factor (that I hadn’t thought of) that lets you get out of this.
Rationality is not about maximising the accuracy of your beliefs, nor the accuracy of others. It is about winning!
EDIT: Grammar. EDIT: The point is, if you would punish a guilty person for a stabler society, you ought to do the same to an innocent person, for the same benefit.
This ignores the causal relationships. How is punishing the innocent supposed to create a stabler society? Because, in your scenario, it’s just this once and no-one will ever know. But it’s never just this once, and people (the judge, X, and Y at least) will know. As one might observe from a glance at the news from time to time. All you’re doing is saying, “But what if it really was just this once and no-one would ever know?” To which the answer is, “How will you know?” To which the LCPW replies “But what if you did know?”, engulfing the objection and Borgifying it into an extra hypothesis of your own.
You might as well jump straight to your desired conclusion and say “But what if it really was Good, not Bad?” and you are no longer talking about anything in reality. Reality itself is the Least Convenient Possible World.
I don’t think you understand what “rationality is about winning” means. It is explained here, here, and here.
Possibly I used it out of context. What I mean is that utility(less crime) > utility(society has an inaccurate view of the justice system) when the latter has few other consequences, and rationality is about maximising utility. Also, in the Least Convenient World, overall this trial will not affect any others, hence negating the point about the accuracy of the justice system. Here knowledge is not an end, it is a means to an end.
See my reply to Roxolan.