I’ve been thinking about the categorical imperative lately and how it obscures more than it illuminates. I’ll make the case that the categorical imperative tries to pull a fast one by taking something for granted, and the thing it takes for granted is the thing we care about.
Disclaimer: I’m sure someone has already made this argument elsewhere in more detailed, elegant, and formal terms. Since I’m not a working academic philosopher, I don’t really mind that I might be rediscovering something people already know. What I find valuable is thinking about these things for myself. If you think it might be interesting to read my thinking out loud, continue on.
One common English translation of Kant’s summary of the categorical imperative is
Act as if the maxims of your action were to become through your will a universal law of nature.
My own translation into more Less Wrong-friendly terms:
Act as if you were following norms that should apply universally.
I think an argument could be made—and probably already has—that this is pointing in the same direction as timeless decision theory.
The categorical imperative aims to solve the problem of which norms to pick, but then goes on to claim universality for them. I think that's the tricky bit: it attempts an end-run around the hard part of picking good norms.
My (admittedly loose) translation tries to make this attempted end-run explicit by including the word "should"; the typical translation's "law of nature" serves the same purpose, just framed to fit a moral realist worldview. The trick is to assume the thing we want to prove, namely universality, by assuming that satisfying our own judgment of what's best will lead to universality.
Let me give an example to demonstrate the problem.
Suppose a Babyeater tries to apply the categorical imperative. Since they think eating babies is good, they will act in accordance with the norm of baby-eating and be happy to see others adopt their baby-eating ways.
You, a human, might object that you don’t like this so it can’t be universally true, yet a Babyeater would object that you not eating babies is an outrageous norm violation that will lead to terrible outcomes.
The trouble is that the categorical imperative tries to smuggle in every moral agent's values without actually doing the hard work of aggregating and deciding between them. It's straightforward so long as everyone has the same underlying values, but with even one iota of difference in what people (or animals, or AIs, or thermostats, or electrons) care about, any attempt to follow the categorical imperative is not substantially different from simply following moral intuition.
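To make the dependence on values concrete, here's a toy sketch, not any canonical formalization of Kant, with agents, norms, and payoffs all invented for illustration. If we model "act as if following norms that should apply universally" as "endorse whichever norm you judge best when universally adopted", agents with different values endorse different "universal laws":

```python
# Toy model: the categorical imperative as "endorse the norm whose universal
# adoption you judge best". Norms and payoffs are made up for illustration.

norms = ["eat_babies", "protect_babies"]

# Each agent scores the world in which a given norm is universally followed.
def human_value(norm):
    return {"eat_babies": -100, "protect_babies": +100}[norm]

def babyeater_value(norm):
    return {"eat_babies": +100, "protect_babies": -100}[norm]

def categorical_imperative(value_fn):
    # "Should apply universally" is judged by the agent's own values.
    return max(norms, key=value_fn)

print(categorical_imperative(human_value))      # -> protect_babies
print(categorical_imperative(babyeater_value))  # -> eat_babies
```

The procedure is identical in both cases; the disagreement comes entirely from the value function each agent plugs in, which is exactly the part the imperative takes for granted.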
The categorical imperative does force one to refine one's intuitions and better optimize norms for working with others who share similar values. That's not nothing, but it only goes as far as coordinating around a metaethics grounded in "my favorite theory".
Recent events have some folks rethinking their commitments to consequentialism, and some of those folks are looking closer at deontological ethics. I think that’s valuable, but it’s also worth keeping in mind that deontology has its own blind spots. A deontologist might not draw any repugnant conclusions, but they are more likely to ignore what people unlike themselves care about.