I cannot (it is usually assumed) coherently will promise-breaking to be a universalisable maxim.
This move is hiding a lot of work within your similarity clustering algorithm. Both “promise keeping” and “promise breaking” describe a wide set of different actions taken in a wide set of situations. Within the Kantian imperative scheme, you are forced to make a single decision over all these different situations. So what chose this set of actions, and why this set rather than some other?
Suppose a particularly nasty gang whose members all have gang tattoos, and whose business runs on promises to kill people. Meanwhile, nice people promise to do nice things. The maxim “if you have a gang tattoo, break your promises; otherwise keep them” might have nicer consequences than everyone always breaking their promises, or everyone always keeping them. But then introduce a gang that doesn’t have tattoos, and a few reformed gang members promising to do nice things. Soon the ideal maxim becomes an enumeration of the ethical action in every conceivable situation. You get a giant lookup table of ethics, and while you can express ethics in that form, you can express anything in that form. Saying “the decisions this agent makes can be described in terms of a giant lookup table over all conceivable situations” is true of all agents, so it doesn’t distinguish any particular subset of agents.
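To make the lookup-table point concrete, here is a toy sketch. The situations, acts, and the two agents are invented for illustration: one decides by a tattoo-based rule, the other by the content of the promise, yet both flatten into lookup tables of exactly the same form.

```python
# Any agent's policy over a (finite) space of situations can be written as a
# situation -> action map, so "expressible as a lookup table" picks out no
# particular kind of agent. Situations and actions here are toy examples.

situations = [
    ("gang tattoo", "promise to kill"),
    ("no tattoo", "promise to help"),
    ("no tattoo", "promise to kill"),
]

def rule_based_agent(situation):
    # Decides on a visible feature of the promiser.
    tattoo, _promise = situation
    return "break promise" if tattoo == "gang tattoo" else "keep promise"

def outcome_based_agent(situation):
    # Decides on the content of the promise itself.
    _tattoo, promise = situation
    return "break promise" if promise == "promise to kill" else "keep promise"

# Two very different internal logics, one identical external format:
table_a = {s: rule_based_agent(s) for s in situations}
table_b = {s: outcome_based_agent(s) for s in situations}

print(table_a)
print(table_b)
```

The tables differ in their entries but not in their form, which is the point: the lookup-table description is available for every agent, so it carries no information about which agents are Kantian.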
I think that actual human Kantians are offloading a lot of work to the brain’s invisible black boxes; to properly say what a Kantian agent is, you need to figure out what those black boxes are doing. (This is the problem of coming up with a sensible technical definition that is similar to common usage.)
I agree with you that choosing the appropriate set of actions is a non-trivial task, and I’ve said nothing here about how Kantians would choose an appropriate class of actions.
I am unclear on the point of your gang examples. You point out that the ideal maxim changes depending on features of the world. The Kantian claim, as I understand it, says that we should implement a particular decision-theoretic strategy, by focusing on maxims rather than acts. This is a distinctively normative claim. The fact that, as we gain more information, the maxims might become increasingly specific seems true, but unproblematic. Likewise, I think it’s true that we can describe any agent’s decisions in terms of a lookup table over all conceivable situations. However, this just seems to indicate that we are looking at the wrong level of resolution. It’s also true that I can describe all agents’ behaviour (in principle) in terms of fundamental physics. But this isn’t to say that there are no useful higher-level descriptions of different agents.
When you say that actual human Kantians offload work to invisible black boxes, do you mean that Kantians, when choosing an appropriate set of actions to make into a maxim, are offloading that clustering of acts into a black box? If so, then I think I agree, and would also like a more formal account of what’s going on in this case. However, I think a good first step towards such a formal account is looking at more qualitative instances of behaviour from Kantians, so we know what it is we’re trying to capture more formally.
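One toy stand-in for that black box, just to show where the hidden work lives: cluster act descriptions by feature overlap, so that “maxims” emerge as clusters. The acts, features, similarity measure, and threshold below are all invented for illustration; the point is that a buried parameter decides how many maxims there are.

```python
# Greedy single-pass clustering of acts by Jaccard similarity of features.
# An act joins the first cluster whose founding exemplar it resembles above
# a threshold; otherwise it founds a new cluster.

acts = {
    "promise a friend a ride, then drive them": {"promise", "kept", "benign"},
    "promise a loan, then repay it":            {"promise", "kept", "benign"},
    "promise a hit for the gang, then do it":   {"promise", "kept", "harmful"},
    "promise a loan, then vanish":              {"promise", "broken", "benign"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(acts, threshold):
    clusters = []  # list of (exemplar_features, [act_names])
    for name, feats in acts.items():
        for exemplar, members in clusters:
            if jaccard(feats, exemplar) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((feats, [name]))
    return [members for _, members in clusters]

# The same acts yield one "maxim" or three, depending on a parameter that
# never appears in the surface statement of the maxim:
for t in (0.5, 0.7):
    print(t, cluster(acts, threshold=t))
```

At threshold 0.5 every act lands in a single cluster (one maxim about promises); at 0.7 the harmful promise and the broken promise each get their own cluster. Nothing in the resulting maxims records that choice, which is the sense in which the work is hidden.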