So, um, OK, time to step in it. Hoping this doesn’t count as a “hot-button political issue”. This is going to echo some things I’ve said over in the comments at Thing of Things, but I’m not going to go digging for those links right now...
It seems to me that some of what Duncan describes here is closely related to that infamous problem which led to such writings as Scott’s old “Meditations” series, Scott Aaronson’s “comment 171”, etc. It’s not so much a matter of “social ownership of the micro”, but rather micro-vs-macro interpretations of certain instructions. Let’s consider the situation in Vignette #1. In it, Alexis and Blake both correctly observe that how you ask a person about something may, at a micro level, affect how comfortable they feel with the possible responses to it.
(This may be a different sense of “micro”. After all, if this micro-emotional effect affects their actual decision, it then necessarily has macro effects! Which is to say, our brains’ all-or-nothing decision systems can take the micro and amplify it. The “micro” here isn’t so much in the results as in what things hinge on: small things, like the details of how a request is phrased or the tone it’s presented in.)
So, if a person X reads writings about how it is wrong to ever pressure anyone (writings by people thinking in macro terms, or who don’t recognize that these micro-influences exist), but interprets them in a micro manner… yeah. You see how it goes. That’s not a matter of “social ownership of the micro”, but it is a matter of recognizing the micro, considering it morally relevant, and attempting to account for it.
Let me step away from this narrow and hopefully not too hot-button point to make a broader one: We need what I’ll call “a theory of legitimate influence”; I think the lack of one is a big part of where Alexis/Blake situations (or the more controversial one I just mentioned) come from. (What do I mean by “we”? Um. I dunno. People. I dunno.)
What do I mean by a “theory of legitimate influence”? Well, let me start by presenting three examples:
1. “Classical liberal” theory of legitimate influence—People are more or less rational agents who can be trusted to act in their own interests. So force, coercion, and lying to people to get them to do what you want are wrong, but basically anything else is OK; so long as you’re honest and don’t use force it’s not like you’ll thereby get people to do things they don’t want to do.
2. “Nerd” theory of legitimate influence—People are kind of like rational agents, but can be biased by all sorts of arational considerations. So, influencing people via reasoned argument is OK, as you’re just helping them by providing them with good reasoning, but attempting to influence them by other, necessarily arational means is wrong.
3. “Sociopathic” theory of legitimate influence—Other people don’t really have preferences, or to the extent they do they don’t matter; do anything you want.
The thing here is that I don’t think any of these three theories is tenable. #3 is blatantly immoral. #1 is better, but fails to rule out things such as, say, badgering a person—no, it’s not using violence or threats or lying, but it’s still wrong. Meanwhile #2 makes the opposite mistake—it’s impossible not to influence people arationally, every interaction you have with other people is full of such things, so taking it seriously leads to total paralysis.
(And to be clear, I am not in any way restricting this to the sorts of situations I implicitly mentioned at the top; this is a generic phenomenon, one that comes up in any request or offer or attempt to organize people to do something. That said, it obviously has especial relevance to such situations, because clearly being charming (which is a thing that one does that influences other people’s responses!) should, it would seem, be considered legitimate influence.)
So, by a “theory of legitimate influence”, I mean a theory of which ways of influencing other people are—without having to do constant consequentialist reasoning—morally permissible (reasoned argument: definitely OK) and which are not (death threats: definitely not OK).
And I think that in order to develop one that works, there are several insights one needs to take into account that commonly expressed theories don’t. A few I can think of:
1. Once you go micro enough, once you get to the level where people can be influenced, you hit the problem that at such a level there often isn’t really such a thing as preexisting un-influenced preferences. If your theory assumes people always have definite preferences, or says that doing anything that shapes other people’s preferences is wrong, it’s not going to work.
2. On the other hand, to state the obvious, sometimes people really do act like agents and have actual preferences—on the larger scale this approximation definitely holds—and if you ignore this then you also will have a problem. (I mean, duh, but worth restating in light of #1.)
3. An important factor that doesn’t seem to go much remarked upon, but which to my mind is crucial, is, well, how the person being influenced feels about it! That is to say: there are forms of influence that feel like another person imposing their will upon you, and there are forms that make you more likely to do something but just… don’t. Ego-dystonic vs. ego-syntonic influence, one might call it. To my mind this is really morally relevant, not just in the heuristic sense I’m mostly talking about but also in a more direct consequentialist sense. But commonly-expressed theories of influence don’t seem to take any notice of this, even as the people expressing those theories don’t in fact condemn ego-syntonic influence the way their explicitly stated theories suggest they should; they just seem to fail to recognize such things as influence at all, as if of course the way the request was phrased had no effect on their response, because they had a fixed preexisting preference, duh. (Important note: getting someone else to internalize some burdensome moral obligation that they feel terrible about, can’t actually enact, and have to doublethink around is also not OK, just in case that was not clear! This doesn’t fit cleanly into either of the two categories above—the obligation isn’t exactly external, no, but it still feels bad—it’s its own third thing; but morally it’s clearly on the bad side of the line.)
4. Maximizing autonomy—maximizing the ability of the person you’re talking to to say “yes”, to say “no”, and to say “I reject the question”—may not always be possible. Ideally you could accomplish all of these simultaneously, but in reality they often trade off against one another. At a macro level, if you don’t sweat the micro, you can of course accomplish all three, by just, you know, not putting people under horrible pressure or anything, and also by explicitly noting the “I reject the question” option, which people often have a hard time coming up with for themselves. But if you care about the micro level, well, at that level there will likely be a tradeoff, and if you don’t account for that, or just refuse to make any tradeoffs, you may encounter a problem.
I don’t know—it’s possible I just have less definite preferences / am more influenceable than other people, and so think of this as more important; e.g. in previous discussions over at Thing of Things, Nita responded to my comments on the matter with (paraphrasing) “no, I really do have definite fixed preferences; none of this applies to me”. So, IDK, typical mind fallacy very possible here. But I think this is important, and I think it’s worth pointing out.
IMO, the “legitimate influence” part of this comment is important and good enough to be a top-level post.
OK, give me some time and maybe I’ll post it, expanded with some related notions that are less relevant to the original context but which I think are worth writing about...
I want to second the need for a Theory of Legitimate Influence. Well, maybe not a need, but I certainly would like to think more clearly in that space. I think I’ve gone back and forth on this a lot.
This is really good and important, but I think you’re making the problem too hard by thinking about universal rather than local norms.
Institutions that can produce interesting long-run technological improvements have to be optimizing for building shared maps, not exploiting existing maps for local efficiencies in ways that erode those maps. A norm that this is the legitimate incentive gradient to follow within such institutions—and that more generally creating shared capacity is more interesting than reallocating existing capacity—is the generator of the different legitimate influence ideologies you mentioned.
Such institutions are necessarily exclusive, at least for now. We shouldn’t try to have accountable standards for this that we can apply to everyone—we have to actually use our judgment about who’s trying and exclude those who aren’t. Institutions like this do well if they learn to recognize and cooperate with other institutions creating surplus, even if their methods and internal norms are different—this normset scales up gracefully because you’re generally happy if other people are also creating surplus value. Such communities can then coordinate in conflict or cooperation with other institutions that don’t create surplus, depending on which stance is advantageous.
If you have closed systems for having these nice things, you don’t have to remake norms everywhere to have nice things in your community. You definitely don’t have to make war on people who don’t want these nice things and demand they adopt your standards.
So, I don’t actually understand most of this comment. One thing at a time here...
This is really good and important, but I think you’re making the problem too hard by thinking about universal rather than local norms.
Well, I’d just say I failed to specify that these might just be local norms, but sure, that’s a good point—local norms vary. E.g. I’ve noticed people in the LWsphere writing about how asking twice about something might be considered pressuring, whereas to my mind asking twice is completely ordinary and it’s asking three times that’s over the line. But yes we have to account for the fact that there’s not necessarily going to be one universally applicable “theory of legitimate influence”, except possibly at a high level that’s not directly applicable.
Institutions that can produce interesting long-run technological improvements have to be optimizing for building shared maps, not exploiting existing maps for local efficiencies in ways that erode those maps.
OK, I don’t understand what you’re saying here, or at least not how it’s relevant. Could you give an example?
A norm that this is the legitimate incentive gradient to follow within such institutions—and that more generally creating shared capacity is more interesting than reallocating existing capacity—is the generator of the different legitimate influence ideologies you mentioned.
I don’t really understand what you’re saying here and to the extent that I do I find the claim confusing. Again, could you give examples of how this might occur?
As an example of why: I’d say that the “nerd” theory here arises from bad observation. It’s not something people actually follow, because following it is impossible, though they might sometimes try. Basically, the question of legitimate influence is one of those social micro-things that ordinary people just can’t really talk about, because their common sense gets in the way; theories of legitimate influence are mostly left implicit. Attempts to make them explicit get filtered through the lens of common sense, yielding instructions that are untenable if taken literally… though nerds will try to follow them all the same. (E.g. a common thing I’ve seen recently is people explicitly stating #1 while implicitly redefining “coercion” to mean whatever it needs to mean. Common sense allows stated theory to diverge heavily from practice.)
In short, #1 and #2 above were meant to be examples of theories that people state, not theories that people follow.
If you have closed systems for having these nice things, you don’t have to remake norms everywhere to have nice things in your community.
Indeed! But I think the important thing to recognize here is that I’m (mostly) not talking about remaking norms at all. When I say “we need a theory of legitimate influence”, I (mostly) mean “We need to learn how to make explicit the norms that we’re already following”. Or perhaps I should say the norms that normal people are already following. :P Once we understand that, then perhaps we can begin to adjust them, if adjustments are needed. Trying to do things the other way around—starting from reasoned-out theories, then trying to practice them—just leads to untenable theories like the nerd theory.
You definitely don’t have to make war on people who don’t want these nice things and demand they adopt your standards.
I… never suggested that?
I think I totally failed to communicate to you what I was trying to say, so I’m gonna try saying it a different way instead of responding on the object level to your questions.
It seems like you’re trying to do something sort of like derive object-level life guidance from the Categorical Imperative. The Categorical Imperative actually doesn’t have enough content to do that. It’s a constraint on moral systems, not a sufficient basis for one. In particular, there’s a bunch of contextual information it doesn’t include. So you end up shifting the burden of sensitivity to context to the individual, in a way that’s not actually sustainable, and for related reasons, very difficult to formalize. Another way of saying this is that it seems like you’re committing something sort of like a Geek Social Fallacy, assuming that no good community should reject anyone who’s a good member of some good community.
What I’m proposing as an alternative is a way to define the attributes of a class of desirable norms, as a generalization of the desiderata you seem to be optimizing for. I tried to explain the class of norms I mean in the humility argument for honesty and against neglectedness considerations.
Different well-intentioned institutions will try different sets of norms within this class. Each particular institution will have object-level standards suitable for some ways of life but not for others. As long as these different institutions can coexist, it’s not all that bad that some of them reject people who would make perfectly good members of other institutions, or punish behavior that is not intrinsically wrong, or accept people or behaviors that would be perceived as damaging elsewhere.
It seems like you’re trying to do something sort of like derive object-level life guidance from the Categorical Imperative.
Could you say more about what gives you this impression?
(It seems like you’re trying to offer a correction to or refinement of Sniffnoy’s post. But I’m having trouble making sense of it, I think because I don’t yet understand the problem that you’re pointing out. I suspect Sniffnoy might be in the same position.)
Things like this seem like they’re infinitely recursive:
An important factor that doesn’t seem to go much remarked upon, but which to my mind is crucial, is, well, how the person being influenced feels about it!
How the person feels about it is of course a function of what the existing norms of interaction are. But Sniffnoy is trying to define a norm that takes how the other person feels into account. This seems like the kind of generalization you get if you aren’t willing to use concepts like “local norms” as an intermediate abstraction.
If you are willing to use that abstraction, then the thing to do is start thinking about whether the local norms in play are unjust enough to rebel against, or just enough to cooperate with, or something else. This heuristic should satisfy the Categorical Imperative, but there’s a range of local norms that can be just, and a different range that are unjust, such that the object-level correct decision in an otherwise identical interaction will often differ depending on context.
I’m very confused about how the Categorical Imperative is supposed to be relevant here. I don’t see how the bit you’ve highlighted relates to it at all.
I think you’ve misread what I’m saying. I am not trying to define that as a norm. I am pointing it out as an important consideration, not a definition.
More generally, I’m not trying to define anything as a norm. As I stated above, what I’m trying to do is not define new norms—certainly not from any sort of first principles—but to make some tiny initial progress towards making explicit the norms that already exist. Which, as you say, vary, but I can at least speak to what I’ve seen. The numbered points above are, as I said, considerations that I think need to be accounted for, and I think failing to account for those points is a big reason previous attempts have failed and ended up somewhere near “classical liberal” or “nerd”, neither of which is at all close to the actual norms anywhere.