So, I don’t actually understand most of this comment. One thing at a time here...
This is really good and important, but I think you’re making the problem too hard by thinking about universal rather than local norms.
Well, I’d just say I failed to specify that these might just be local norms, but sure, that’s a good point—local norms vary. E.g. I’ve noticed people in the LWsphere writing about how asking twice about something might be considered pressuring, whereas to my mind asking twice is completely ordinary and it’s asking three times that’s over the line. But yes, we have to account for the fact that there’s not necessarily going to be one universally applicable “theory of legitimate influence”, except possibly at a high level that’s not directly applicable.
Institutions that can produce interesting long-run technological improvements have to be optimizing for building shared maps, not exploiting existing maps for local efficiencies in ways that erode those maps.
OK, I don’t understand what you’re saying here, or at least not how it’s relevant. Could you give an example?
A norm that this is the legitimate incentive gradient to follow within such institutions—and that more generally creating shared capacity is more interesting than reallocating existing capacity—is the generator of the different legitimate influence ideologies you mentioned.
I don’t really understand what you’re saying here and to the extent that I do I find the claim confusing. Again, could you give examples of how this might occur?
For an example of why—well, I’d say that the “nerd” theory here arises from bad observation. It’s not something people actually follow, because that’s impossible, though they might sometimes try. Basically, the question of legitimate influence is one of those social micro-things that ordinary people just can’t really talk about, because their common sense gets in the way; theories of legitimate influence are mostly left implicit. Attempts to make them explicit get filtered through the lens of common sense, yielding instructions that are untenable if taken literally… though nerds will try to follow them all the same. (E.g. a common thing I’ve seen recently is explicitly stating #1, but implicitly redefining “coercion” to mean whatever it needs to mean. Common sense allows statements to diverge heavily from practice.)
In short, #1 and #2 above were meant to be examples of theories that people state, not theories that people follow.
If you have closed systems for having these nice things, you don’t have to remake norms everywhere to have nice things in your community.
Indeed! But I think the important thing to recognize here is that I’m (mostly) not talking about remaking norms at all. When I say “we need a theory of legitimate influence”, I (mostly) mean “We need to learn how to make explicit the norms that we’re already following”. Or perhaps I should say the norms that normal people are already following. :P Once we understand that, then perhaps we can begin to adjust them, if adjustments are needed. Trying to do things the other way around—starting from reasoned-out theories, then trying to practice them—just leads to untenable theories like the nerd theory.
You definitely don’t have to make war on people who don’t want these nice things and demand they adopt your standards.
I think I totally failed to communicate to you what I was trying to say, so I’m gonna try saying it a different way instead of responding on the object level to your questions.
It seems like you’re trying to do something sort of like derive object-level life guidance from the Categorical Imperative. The Categorical Imperative actually doesn’t have enough content to do that. It’s a constraint on moral systems, not a sufficient basis for one. In particular, there’s a bunch of contextual information it doesn’t include. So you end up shifting the burden of sensitivity to context onto the individual, in a way that’s not actually sustainable and, for related reasons, is very difficult to formalize. Another way of saying this is that it seems like you’re committing something sort of like a Geek Social Fallacy: assuming that no good community should reject anyone who’s a good member of some good community.
What I’m proposing as an alternative is a way to define the attributes of a class of desirable norms, as a generalization of the desiderata you seem to be optimizing for. I tried to explain the class of norm I mean in the humility argument for honesty and against neglectedness considerations.
Different well-intentioned institutions will try different sets of norms within this class. Each particular institution will have object-level standards suitable for some ways of life but not for others. As long as these different institutions can coexist, it’s not all that bad that some of them reject people who would make perfectly good members of other institutions, or punish behavior that is not intrinsically wrong, or accept people or behaviors that would be perceived as damaging elsewhere.
It seems like you’re trying to do something sort of like derive object-level life guidance from the Categorical Imperative.
Could you say more about what gives you this impression?
(It seems like you’re trying to offer a correction to or refinement of Sniffnoy’s post. But I’m having trouble making sense of it, I think because I don’t yet understand the problem that you’re pointing out. I suspect Sniffnoy might be in the same position.)
Things like this seem like they’re infinitely recursive:
An important factor that doesn’t seem to go much remarked upon, but which to my mind is crucial, is, well, how the person being influenced feels about it!
How the person feels about it is of course a function of what the existing norms of interaction are. But Sniffnoy is trying to define a norm that takes how the other person feels into account. This seems like the kind of generalization you get if you aren’t willing to use concepts like “local norms” as an intermediate abstraction.
If you are willing to use that abstraction, then the thing to do is start thinking about whether the local norms are unjust enough to rebel against, or just enough to cooperate with, or something else. This heuristic should satisfy the Categorical Imperative, but there’s a range of local norms that can be just, and a different range that are unjust, such that the object-level correct decision in an otherwise identical interaction will often be different depending on context.
I’m very confused as to how the Categorical Imperative is supposed to be relevant here. I don’t see how the bit you’ve highlighted relates to it at all.
I think you’ve misread what I’m saying. I am not trying to define that as a norm. I am pointing it out as an important consideration, not a definition.
More generally, I’m not trying to define anything as a norm. As I stated above, what I’m trying to do is not define new norms—certainly not from any sort of first principles—but to make some tiny initial progress towards making explicit the norms that already exist. Which, as you say, vary, but I can at least speak to what I’ve seen. The numbered points above are, as I said, considerations that I think need to be accounted for, and I think failing to account for those points is a big reason previous attempts have failed and ended up somewhere near “classical liberal” or “nerd”, neither of which is at all close to the actual norms anywhere.
You definitely don’t have to make war on people who don’t want these nice things and demand they adopt your standards.
I… never suggested that?