I’m actually a big fan of the Categorical Imperative. In the least, I find it morally illuminating, if not definitive, because it gets people to think about the moral principles behind their actions and avoid contradiction in their moral views, particularly hypocritical or self-serving contradiction. I suspect that any rational ethics (whatever that is) would have Categorical Imperative-like thinking involved.
I don’t think that the Categorical Imperative, at least the way I understand it, requires radical honesty.
If I lie to the Nazis, I “make it my maxim” that it is justified to lie to authorities to save an innocent person from death (when I can provide a reasonable argument that the person is in fact innocent and the authorities are wrong to try to kill them). Can I, at the “same time”, “will” that this maxim “become a universal law” without engaging in “contradiction”? Yes, I can.
My maxim is not that I think I have the right to decide, in general, who deserves to know the truth and who doesn’t. Rather, my maxim is that potential murderers of innocent people don’t deserve to know the truth, when I can provide a reasonable argument that their intended victims are truly innocent and that giving them up would lead to unjust harm.
Now, say that WWII is over, and Hitler himself is hiding out in Germany with his sympathizers, when Allied soldiers come knocking. Would the family hiding him be justified in lying, according to my maxim above? If they were justified according to my maxim, while I maintain that they would be unjustified in protecting Hitler, then I would engage in contradiction if I acted according to that maxim. Yet I hold that Hitler’s benefactors are not justified by my maxim, because they cannot provide any reasonable argument showing that Hitler is innocent and that he does not deserve to be captured.
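To make the shape of that argument concrete, here is a minimal sketch that treats a maxim as a predicate over situations; the Situation fields and the maxim_permits_lying function are my own illustrative inventions, not anything from Kant, and the two example situations are just the cases discussed above.

```python
# Toy model of a conditional maxim, purely illustrative.
from dataclasses import dataclass

@dataclass
class Situation:
    asker_intends_unjust_killing: bool      # e.g., authorities hunting an innocent person
    reasonable_argument_of_innocence: bool  # can the speaker actually defend that claim?

def maxim_permits_lying(s: Situation) -> bool:
    """The conditional maxim above: lying is permitted only when the asker
    intends unjust harm AND the speaker can reasonably argue that the hidden
    person is innocent."""
    return s.asker_intends_unjust_killing and s.reasonable_argument_of_innocence

hiding_innocent_from_nazis = Situation(True, True)
hiding_hitler_from_allies = Situation(False, False)  # no argument of innocence; capture is not unjust

print(maxim_permits_lying(hiding_innocent_from_nazis))  # True
print(maxim_permits_lying(hiding_hitler_from_allies))   # False
```

The only point of the sketch is that the maxim’s stated conditions, not the liar’s private preferences, do the work of separating the two cases; whether such a conditional maxim survives Kant’s own universalization test is exactly what the comment is arguing.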
The problem isn’t Kant’s Categorical Imperative, the problem is that he was sometimes incorrect about what it implies.
P.S. I agree with your main point about not bringing straw men into discussions simply because they were advanced in famous, but discredited, philosophical arguments, unless the author thinks there is something particularly illuminating about doing so.
The problem isn’t Kant’s Categorical Imperative, the problem is that he was sometimes incorrect about what it implies.
The problem with the Categorical Imperative is that it is sufficiently vague that it implies anything you want it to. You can (almost?) always make the “maxim” of your action specific enough to make your action permissible, for example:
I want to kill my professor for giving me a bad grade, so here’s my maxim: If you were born on November 1, 1985, are white, have short brown hair, are wearing a black Tool t-shirt and Simpson’s pajama pants, and got a D in your world lit class due to attendance despite acing the tests, papers, and finals, you can kill your professor.
Can this be willed as a universal law without contradiction? I certainly can’t find a contradiction.
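One way to see why such maxims slip through: the more specific the condition, the fewer agents it could ever apply to, so “everyone acts on it” changes almost nothing and no practical contradiction can arise. A toy sketch of that point (my own framing, not Kant exegesis):

```python
# Toy illustration: a "maxim" here is just a condition plus an action, and
# "universalizing" it means letting every agent in a population act on it.

def hyper_specific_condition(agent: dict) -> bool:
    # Matches essentially one person in the world (the details from the example above).
    return (agent.get("birthday") == "1985-11-01"
            and agent.get("shirt") == "black Tool t-shirt"
            and agent.get("grade_in_world_lit") == "D")

population = [
    {"name": "the aggrieved student", "birthday": "1985-11-01",
     "shirt": "black Tool t-shirt", "grade_in_world_lit": "D"},
    {"name": "everyone else"},  # stands in for the billions of non-matching agents
]

# Under "universalization", only agents meeting the condition ever act on the maxim.
actors = [a for a in population if hyper_specific_condition(a)]
print(len(actors))  # 1 -- so universalizing it changes nothing for anyone else
```

In effect, universalizing a sufficiently specific maxim is indistinguishable from one person granting themselves an exception, which is the meta-maxim objection raised in the reply below.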
I remember in my advanced logic class, taught by the philosophy department, a later section of the book formalized the golden rule into a logical system, i.e., do unto others as you would have them do unto you in the same situation. In other words, be consistent. I never worked through that chapter, but I read through the setup and the whole system suffered from a vagueness similar to Kant’s: when does a situation count as “the same?” As far as I could tell, everything was moral because no two real life situations could be the same—surely something in the universe moved somewhere. Maybe just an atom.
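For what it’s worth, that kind of formalization might look roughly like the following; this is my own reconstruction, not the textbook’s actual system, and both the Accepts predicate and the “same situation” relation ≈ are invented for illustration:

```latex
% Rough sketch of a "consistency" golden rule (my reconstruction, not the textbook's).
% Read: if agent x accepts action a being done to agent y in situation s, then x
% must accept a being done to x in any situation s' that counts as "the same".
\forall x \,\forall y \,\forall a \,\forall s \,\forall s' \;
  \bigl( s \approx s' \;\wedge\; \mathrm{Accepts}(x, a, y, s) \bigr)
  \rightarrow \mathrm{Accepts}(x, a, x, s')
```

Everything hinges on how ≈ is defined. If “the same” means strictly identical, the antecedent is never satisfied by two real situations and the principle constrains nothing, which is the vagueness complained about above.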
Btw, yes I really did get a D in world lit because of attendance, and no, I’m not really that upset about it. It was a couple of years ago, after all.
I want to kill my professor for giving me a bad grade, so here’s my maxim: If you were born on November 1, 1985 … you can kill your professor.
The answer I heard somewhere was that this line of reasoning was an application of the meta-maxim ‘I will invent highly specific maxims to allow me to do whatever I want’, which itself cannot be willed as a universal law without contradiction.
Edit: alternatively, the CI was intended to be a necessary condition, not a sufficient one. (Disclaimer: I haven’t read much Kant in the original.)
In the Robin Hanson tradition, whenever I think that I have figured out a flaw in Kant’s reasoning, I halt and recognize that he lived until he was 79 and spent every day of his life thinking about these sorts of things and taking long walks. It is good to question him, but also to be humble and research any extant rebuttals to one’s own argument.
There is a good overview of Kant here: http://www.trinity.edu/cbrown/intro/Kant_ethics.html, and more at the Stanford Encyclopedia of Philosophy.
Kant had a peculiar obsession with what rational and reasoning actors would choose to do and what would happen if all people saw rationality and reason as definitive tools. Why there is so much resistance to delving into Kant in the Less Wrong community is beyond me.
Why there is so much resistance to delving into Kant in the Less Wrong community is beyond me.
For one, Kant wasn’t relevant to the original topic of discussion—no one was arguing from Kant’s position. Also, I think most people on here agree that Kant was wrong. In more ways than one. Thus debating Kant is pretty much a dead end.
There’s a proverb I failed to Google, which runs something like, “Once someone is known to be a liar, you might as well listen to the whistling of the wind.” You wouldn’t want others to expect you to lie, if you have something important to say to them; and this issue cannot be wholly decoupled from the issue of whether you actually tell the truth. If you’ll lie when the fate of the world is at stake, and others can guess that fact about you, then, at the moment when the fate of the world is at stake, that’s the moment when your words become the whistling of the wind.
-from Eliezer’s quoted article Here
I don’t know if you read the entire body of my comment bringing up Kant, but it rests on asking if there was a similarity in Eliezer’s argument and Kant’s, with a question mark at the end.
Both Eliezer and Kant seem to think that this abstract thing called “trust” suffers when individuals choose to lie for their own purposes. Both of them suggest that individuals who believe this would benefit from adopting a maxim that they should not lie.
Eliezer states in the comments that you can lie to people who aren’t part of your community of rational or potentially rational individuals.
Kant says that you can’t lie to people, even if they aren’t part of your club.
You don’t need the CI to reach either of these conclusions; the comment points out that you could do this on Utilitarian grounds. Utilitarian reasoning might even support Kant’s “don’t lie to anyone ever” over Eliezer’s conceptions.
As for arguing Kant leading to a dead end, there is plenty of contemporary philosophy that still uses a lot of Kant and even NPOV Wikipedia has a section detailing Kant in contemporary philosophy.
Also, I think most people on here agree that Kant was wrong. In more ways than one. Thus debating Kant is pretty much a dead end.
I am always of the mind that showing that someone’s assumptions are wrong doesn’t mean their argument has no value for any future discussion. In this particular case we got to use a Kantian thought experiment to talk about what looks like a variation on Kantian logic. I’m sorry I used the K word.
The idea of everyone on LW believing that Kant was almost totally wrong and that we should completely discard him is a little unsettling to me. There is a much larger community out there that accepts elements of Kant’s arguments and methods and still applies them; I would again push a Robin Hanson line by suggesting that most rationalists are elsewhere and we should work harder to find them.
With regard to whether Kant was relevant, I quote the article we are commenting on:
The problem with bringing up Kant here is that he simply doesn’t belong. “Don’t [lie] to anyone unless you’d also slash their tires, because they’re Nazis or whatever,” is very different from Kant saying (paraphrasing), “Never lie, ever, or else you’re a bad person.” An argument against the former by conflating it with the latter doesn’t accomplish anything. Further, there’s no mention of all the stuff Kant has to assume in order to argue for the Categorical Imperative and, finally, the value of radical honesty.
In other words, I agree with Andrew’s criticism of you (among others) for bringing up Kant in the first place. He simply didn’t belong.
...there is plenty of contemporary philosophy that still uses a lot of Kant and even NPOV Wikipedia has a section detailing Kant in contemporary philosophy.
First, I should be clear, I was only talking about Kant’s ethics. My fault for not making that more clear to begin with. However, I don’t think this counts much in Kant’s favor because there is plenty of contemporary philosophy still using ideas from Plato or Aquinas, even when they are pretty clearly wrong (metaphysical realism, anyone? How about agent causation?).
There may be something worthwhile in Kant, but I’m rather skeptical. Given what Kant I’ve already read, I think my efforts are better spent elsewhere. If you think there is something worthwhile in Kant, then by all means, tell us about it. It may make a good post here.
It’s not Kant everyone’s chucking out—it’s deontological ethics, in favour of consequentialism. If I could only get the world to pick up one rationalist lesson, I would like them to shut up and multiply.
In matters of morality (as opposed to law), the important thing is to follow correct principles, not to find technicalities. As soon as someone writes on the bottom line “X is a moral act”, where X is what he/she happens to want to do at the moment, any further “moral reasoning” is just self-deception. Any reasonable person can tell that such an incredibly specific situation is useless for forming a categorical imperative. The fact that the idea of a categorical imperative breaks down when it is so vaguely specified is strong evidence against that particular implementation of it, but only weak evidence against the concept as a whole. It would require more work to define a standard of reasonableness for what situations can and can’t be generalized before one can say whether the categorical imperative does or doesn’t make sense.
That said, I suspect that if one starts with a naive categorical imperative like Matt expresses above and iteratively finds and patches flaws, one will eventually converge towards consequentialism. I could be proven wrong about this, though.
The problem is that Kant lies about his approach’s implications and that no one can agree with anyone else as to what they are in any useful manner.
“Cooperate in the Prisoner’s Dilemma” is probably one of them, although it’s hard to apply it directly to anything other than game theory problems.
See also: Superrationality
Kant’s Categorical Imperative, the classical Golden Rule, and Hofstadter’s superrationality all seem to me to be reflections of the same observation: Ethics rests on an algebraic symmetry among agents.
(I don’t have the philosophical or mathematical skill to formalize this. I recognize that this may make me sound like — or be — a crank on the subject. Sorry about that.)
The concept of morality doesn’t make sense without multiple agents. If your model of the world doesn’t include other entities of the same kind as you — but who are not you — then moral reasoning leads quite logically to sociopathy. If you are the only real agent, or for that matter if the universe is a dialogue between your unique soul and Almighty God, then reasoning about morality is nonsense. It is only because there are multiple agents who each are capable of influencing the others’ outcomes that morality makes any sense at all to talk about.
Possibly the most dramatic example of this symmetry I’ve seen is Eliezer’s True Prisoner’s Dilemma, which shows that the symmetry can exist even between agents that do not share any object-level values. If you believe the paperclip-maximizer in the True Prisoner’s Dilemma is a rational agent that models the world as containing other rational agents symmetric to itself (but with different values), then you cooperate, because you’re not deciding between four possible outcomes; the symmetry means you’re deciding between (C,C) and (D,D).
(It’s not a matter of judging whether you implement the same algorithm as the other guy. It’s a matter of judging whether you’re in the same situation as the other guy, and that you correctly appraise this, and recognize that the other guy correctly appraises it, and so on recursively.)
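Here is a minimal sketch of that collapse of the decision, under the strong assumption that the other agent’s choice is guaranteed to mirror yours; the payoff numbers below are just the standard illustrative ones, not taken from Eliezer’s post:

```python
# Toy Prisoner's Dilemma. Payoffs are (my_payoff, their_payoff); the numbers are
# the usual illustrative ones, not anything from the True Prisoner's Dilemma post.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_move_assuming_independence() -> str:
    # Treat the other player's move as fixed and unknown: D dominates C.
    return "D" if all(PAYOFFS[("D", o)][0] >= PAYOFFS[("C", o)][0] for o in "CD") else "C"

def best_move_assuming_symmetry() -> str:
    # The symmetry assumption: whatever I choose, the mirrored agent chooses too,
    # so the only reachable outcomes are (C, C) and (D, D).
    return "C" if PAYOFFS[("C", "C")][0] > PAYOFFS[("D", "D")][0] else "D"

print(best_move_assuming_independence())  # D
print(best_move_assuming_symmetry())      # C
```

Whether that symmetry assumption is ever warranted against a real opponent is, of course, the entire argument.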
Kant’s approach seems to be partly based on the idea of an equilibrium: acting on a rule that treats others as mere means is self-undermining; treating others as ends is the only winning choice if those others are also rational. It also seems to me that reflexive decision theories aim at a more axiomatic reflection of this same principle, by explicitly incorporating the notion that agents model other agents’ behavior.
Evolution has encoded into humankind an instinct for recognizing agentiness. This instinct is buggy as hell; it is much more sensitive than specific. It sees agentiness in non-agenty crap like the weather — “Hey you! Rain agent! Here, have a chicken … now, come rain on my crops, please!” — and if you draw two dots and a horizontal line beneath them, it sees the face of an agent. However, it is by dint of recognizing that the world contains other agents like ourselves, who also in turn recognize this fact, that humans are able to cooperate for mutual benefit in a way which other apes and mammals are not.