I think I totally failed to communicate to you what I was trying to say, so I’m gonna try saying it a different way instead of responding on the object level to your questions.
It seems like you’re trying to do something sort of like derive object-level life guidance from the Categorical Imperative. The Categorical Imperative actually doesn’t have enough content to do that. It’s a constraint on moral systems, not a sufficient basis for one. In particular, there’s a bunch of contextual information it doesn’t include. So you end up shifting the burden of sensitivity to context onto the individual, in a way that’s not actually sustainable and, for related reasons, very difficult to formalize. Another way of saying this is that it seems like you’re committing something sort of like a Geek Social Fallacy: assuming that no good community should reject anyone who’s a good member of some good community.
What I’m proposing as an alternative is a way to define the attributes of a class of desirable norms, as a generalization of the desiderata you seem to be optimizing for. I tried to explain the class of norms I mean in the humility argument for honesty and against neglectedness considerations.
Different well-intentioned institutions will try different sets of norms within this class. Each particular institution will have object-level standards suitable for some ways of life but not for others. As long as these different institutions can coexist, it’s not all that bad that some of them reject people who would make perfectly good members of other institutions, or punish behavior that is not intrinsically wrong, or accept people or behaviors that would be perceived as damaging elsewhere.
It seems like you’re trying to do something sort of like derive object-level life guidance from the Categorical Imperative.
Could you say more about what gives you this impression?
(It seems like you’re trying to offer a correction to or refinement of Sniffnoy’s post. But I’m having trouble making sense of it, I think because I don’t yet understand the problem that you’re pointing out. I suspect Sniffnoy might be in the same position.)
Things like this seem like they’re infinitely recursive:
An important factor that doesn’t seem to go much remarked upon, but which to my mind is crucial, is, well, how the person being influenced feels about it!
How the person feels about it is of course a function of what the existing norms of interaction are. But Sniffnoy is trying to define a norm that takes how the other person feels into account. So the norm is defined in terms of feelings, and the feelings are themselves shaped by the norms, which is where the recursion comes in. This seems like the kind of generalization you get if you aren’t willing to use concepts like “local norms” as an intermediate abstraction.
If you are willing to use that abstraction, then the thing to do is to start thinking about whether the local norms are unjust enough to rebel against, or just enough to cooperate with, or something else. This heuristic should satisfy the Categorical Imperative, but there’s a range of local norms that can be just, and a different range that are unjust, such that the object-level correct decision in an otherwise identical interaction will often be different depending on context.
I’m very confused about how the Categorical Imperative is supposed to be relevant here. I don’t see how the bit you’ve highlighted relates to it at all.
I think you’ve misread what I’m saying. I am not trying to define that as a norm. I am pointing it out as an important consideration, not a definition.
More generally, I’m not trying to define anything as a norm. As I stated above, what I’m trying to do is not to define new norms—certainly not from any sort of first principles—but to make some tiny initial progress towards making explicit the norms that already exist. Which, as you say, vary, but I can at least speak to what I’ve seen. The numbered points above are, as I said, considerations that I think need to be accounted for, and I think failing to account for them is a big reason previous attempts have failed and ended up somewhere near “classical liberal” or “nerd”, neither of which is at all close to the actual norms anywhere.