Maybe it’s because behaviorist techniques like reinforcement feel like they don’t respect human agency enough. But if you aren’t treating humans more like animals than most people are, then you’re modeling humans poorly.
But treating human beings, especially adults, like animals is characteristically unethical. Applying some system of reinforcement to someone's behavior when they have asked you to do so is innocuous enough, as is, of course, applying it to your own.
But generally manipulating the behavior of other people by means other than convincing them that they should behave in a certain way seems to me to be almost definitional of a dark art. If that’s not controversial, then I think this article should be qualified appropriately: never do this to other people without their explicit consent.
But treating human beings, especially adults, like animals is characteristically unethical.
It seems to me like the flow is in the reverse direction: many unethical manipulations involve treating adults like animals. But people who skillfully use positive reinforcement are both more pleasant to be around and more effective- which seems like something ethical systems should point you towards, not away from.
That’s a fair point: I may have been treating a conditional like a bi-conditional. I think my sense of the matter is this: if a friend told me that he spent a lot of our time together thinking through ways to positively reinforce some of my behaviors, even to my benefit, I would become very suspicious of him. I would feel that I’d been treated as a child or a dog. His behavior would seem to me to be manipulative and dishonest, and I think I would feel this way even if I agreed that the results of his actions were on the whole good and good for me.
Do you think this sort of reaction on my part would be misguided? Or am I on to something?
I agree with you that your autonomy is threatened by the manipulations of others. But threats only sometimes turn into harm- distinguishing between manipulations you agree with and disagree with is a valuable skill.
Indeed, there’s a general point that needs to be made about human interaction, and another about status, but first a recommendation: try to view as many of your actions as manipulations as possible. This will help separate out the things that, on reflection, you want to do and the things that, on reflection, you don’t want to do. For example:
if a friend told me that he spent a lot of our time together thinking through ways to positively reinforce some of my behaviors, even to my benefit, I would become very suspicious of him. I would feel that I’d been treated as a child or a dog. His behavior would seem to me to be manipulative and dishonest,
Emphasis mine. The reaction- of calling his behavior manipulative and dishonest- feels like it punishes manipulation, which you might want to do to protect your autonomy. But it actually punishes honesty, because the trigger was your friend telling you! Now, if your friend wants to change you, they’ll need to try to do it subtly. Your reaction has manipulated your friend without his explicit consent- and probably not in the direction you wanted it to.
So, the general point: human social interaction is an incredibly thorny field, in part because there are rarely ways to learn or teach it without externalities. Parents, for example, tell their children to share- not because sharing is an objective moral principle, but because it minimizes conflict. As well, some aspects of human social interaction are zero-sum games, in which people who are already skilled at interaction lose if others get better at it- and so they discourage discussions that raise general social interaction skills.
The status interpretation: generally, manipulation increases the status of the manipulator and decreases the status of the manipulated. Resistance to manipulation could then be a status-preserving move, and interest in manipulation could be a status-increasing move. What articles like this try to do is lower the status effects of manipulation (in both directions)- Luke proudly recounts the time Eliezer manipulated him so that he could better manipulate Eliezer. If being molded like this is seen more positively, then resistance to being molded (by others in the community) will decrease, and the community will work better and be happier. As well, I suspect that people are much more comfortable with manipulations if they know how to do them themselves- if positive reinforcement is a tool used by creepy Others, it's much easier to dislike than if it's the way you got your roommate to finally stop annoying you.
I’m confused, not only by the beginning of this comment, but by several others as well.
I thought being a LessWronger meant you no longer thought in terms of free will. That it’s a naive theory of human behavior, somewhat like naive physics.
I thought so, anyway. I guess I was wrong? (This comment still upvoted for amazing analysis.)
I thought being a LessWronger meant you no longer thought in terms of free will. That it’s a naive theory of human behavior, somewhat like naive physics.
Autonomy and philosophical free will are different things. Philosophical free will is the question “well, if physical laws govern how my body acts, and my brain is a component of my body, then don’t physical laws govern what choices I make?”, to which the answer is mu. One does not need volition on the level of atoms to have volition on the level of people- and volition on the level of people is autonomy.
(You will note that LW is very interested in techniques to increase one’s will, take more control over one’s goals, and so on. Those would be senseless goals for a fatalist.)
Thanks for clarifying that.
I should note that I am very interested in techniques for self-improvement, too. I am currently learning how to read. (Apparently, I never knew :( ) And also get everything organized, GTD-style. (It seems a far less daunting prospect now than when I first heard of the idea, because I’m pseudo-minimalist.)
I'm still surprised at the average LWer's reaction here. Probably because the nature of 'volition on the level of people' isn't clear to me. Not something I expect you to answer; clarifying the distinction was helpful enough.
I think it's misguided personally. You're already being manipulated this way by your environment whether or not you realize it.
Well, I'm claiming that this kind of manipulation is often, even characteristically, unethical. Since my environment is not capable of being ethical or unethical (that would be a category mistake, I think), that's not relevant to my claim.
I was referring, though, to the case of your friend using reinforcement to alter your behavior in a way that would benefit you. I just have a hard time seeing someone trying to help you as unethical behavior.
That's fair. I should tone down my point and say that doing this sort of thing is disrespectful, not evil or anything. It's the sort of thing parents and teachers do with kids. With your peers, unsolicited reinforcement training is seen as disrespectful because it stands in lieu of just explaining to the person what you think they should be doing.
Often it is, we agree. But it’s the ‘telling’ there that’s the problem. A respectful way to modify someone’s behavior is to convince them to do something different (which may mean convincing them to subject themselves to positive reinforcement training). The difference is often whether we appeal to someone’s rationality, or take a run at their emotions.
A respectful way to modify someone’s behavior is to convince them to do something different
I agree that there are respectful ways to convince me to do something different, thereby respectfully modifying my behavior. Many of those ways involve appealing to my rationality. Many of those ways involve appealing to my emotions.
There are also disrespectful ways to convince me to do something different. Many of those ways involve appealing to my rationality. Many of those ways involve appealing to my emotions.
There are also disrespectful ways to convince me to do something different.
Many of those ways involve appealing to my rationality.
So, by ‘appealing to someone’s rationality’ I mean, at least, arguing honestly. Perhaps I should have specified that. Do you still think there are such examples?
Sure. Suppose I believe my husband is a foolish, clumsy, unattractive oaf, and I want him to take dance lessons. Suppose I say to him, “Hey, husband! You are a foolish, clumsy, unattractive oaf. If you take dance lessons, you will be less clumsy. That’s a good thing. Go take dance lessons!” I would say, in that situation, I have presented an honest, disrespectful argument to my husband with the intention of convincing him to do something different.
I agree completely that my example is disrespectful in virtue of (in vice of?) something other than its appeal to reason.
If that makes it a poor example of what you’re asking for, I misunderstood what you were asking for. Which, given that you’re repeatedly asking me for “an example” without actually saying precisely what you want an example of, is not too surprising.
So, perhaps it’s best to back all the way out. If there’s something specific you’d like me to provide an example of, and you can tell me what it is, I’ll try to provide an example of it if I can. If there isn’t, or you can’t, that’s OK too and we can drop this here.
Well this runs into the problem of giving unsolicited advice. Most people don’t respond well to that. I think it’s probably difficult for most rationalists to remember this since we are probably more open to that.
Not really. Rationalists are just open to different advice. There’s lots of advice rationalists will reject out of hand. (Some of which is actually bad advice, and some of which is not.)
Everyone believes themselves to be open-minded; the catch is that we’re all open to what we’re open to, and not open to what we’re not.
Well I agree that none of us is completely rational when it comes to accepting advice. But don’t you think rationalists are at least better at that than most people?
It’s a guess but I think it’s a fairly logical one. Think about all the stories of rationalists who’ve overcome a belief in God, or ESP or whatever. Seems to me that demonstrates an ability to suppress emotion and follow logic that should carry over into other areas.
As I mentioned in another comment, you can just read LW threads on contentious topics to observe as a matter of practice that LW rationalists at least are no different than other people in this respect: open only to what they’re not already opposed to.
This is relevant evidence: evidence directly connected to the topic (openness to unsolicited advice). Your evidence is not, because it describes situations where rationalists changed their minds on their own. This is really different—changing your own mind is in no way similar to being open to someone else changing your mind, since somebody else trying to change your mind creates internal resistance in a way that changing your own mind does not.
It’s like using people’s ability to walk on dry land as evidence of their ability to swim underwater, when an actual swimming test shows the people all drowning. ;-)
Since I remember your username being associated with various PUA discussions, I assume you at least partly have those in mind. I can't say much about those, never having really been part of the discussion, but I'll note that it's a particularly contentious issue. (My position has changed somewhat, but given that I previously had only a vague awareness of the PUA community, and not through anyone who participated in or approved of it, I don't consider that especially remarkable.) Less Wrongers seem to be more pliable than the norm on less contentious matters which still provoke significant resistance in much of the population, such as the safety of fireplaces.
Since I remember your username being associated with various PUA discussions, I assume you at least partly have those in mind. I can't say much about those, never having really been part of the discussion, but I'll note that it's a particularly contentious issue
It’s not the only one. See any thread on cryonics, how well SIAI is doing on various dimensions, discussions of nutrition, exercise, and nootropics… it’s not hard to run across examples of similar instances of closed-mindedness on BOTH sides of a discussion.
Less Wrongers seem to be more pliable than the norm
My point is: not nearly enough.
on less contentious matters which still provoke significant resistance in much of the population, such as the safety of fireplaces.
As I mentioned in that thread, LWers skew young and toward not already having fireplaces: that they'd be less attached to them is kind of a given.
This is really different—changing your own mind is in no way similar to being open to someone else changing your mind, since somebody else trying to change your mind creates internal resistance in a way that changing your own mind does not.
Not only is it similar, the abilities in those areas are significantly correlated.
Agreed. Wanting to be “the kind of person who changes their mind” means that when you get into a situation of someone else trying to change your mind, and you notice that you’re getting defensive and making excuses not to change your mind, the cognitive dissonance of not being the kind of person you want to be makes it more likely, at least some of the time, that you’ll make yourself be open to changing your mind.
This is a nice idea, but it doesn’t hold up that well under mindkilling conditions: i.e. any condition where you have a stronger, more concrete loyalty to some other chunk of your identity than being the kind of person who changes your mind, and you perceive that other identity to be threatened.
It also doesn’t apply when you’re blocked from even perceiving someone’s arguments, because your brain has already cached a conclusion as being so obvious that only an evil or lunatic person could think something so stupid. Under such a condition, the idea that there is even something to change your mind about will not occur to you: the other person will just seem to be irredeemably wrong, and instead of feeling cognitive dissonance at trying to rationalize, you will feel like you are just patiently trying to explain common sense to a lunatic or a troll.
IOW, everyone in this thread who’s using their own experience (inside view) as a guide to how rational rationalists are, is erring in not using the available outside-view evidence of how rational rationalists aren’t: your own experience doesn’t include the times where you didn’t notice you were being closed-minded, and thus your estimates will be way off.
Not only is it similar, the abilities in those areas are significantly correlated.
In order to use that ability, you have to realize it needs to be used. If someone is setting out to change their own mind, then they have already realized the need. If someone is being offered advice by others, they may or may not realize there is anything to change their mind about. It is this latter skill (noticing that there’s something to change your mind about) that I’m distinguishing from the skill of changing your mind. They are not at all similar, nor is there any particular reason for them to be correlated.
Really? You don't think the sort of person who tries harder than average to actually change their mind more often will also try harder than average to examine various issues that they should change their mind about?
But that isn’t the issue: it’s noticing that there is something you need to examine in the first place, vs. just “knowing” that the other person is wrong.
Honestly, I don’t think that the skill of being able to change your mind is all that difficult. The real test of skill is noticing that there’s something to even consider changing your mind about in the first place. It’s much easier to notice when other people need to do it. ;-)
Inasmuch as internal reflective coherence and a desire to self-modify (towards any goal), or even just the urge to signal that desire, are not the same thing... yeah, it doesn't seem to follow that these two traits would necessarily correlate.
This feels like an equivocating-shades-of-grey argument, of the form ‘nobody is perfectly receptive to good arguments, and perfectly unswayed by bad ones, therefore, everyone is equally bad at it.’ Which is, of course, unjustified. In truth, if rationalists are not at least somewhat more swayed by good arguments than bad ones (as compared to the general population), we’re doing something wrong.
Which is, of course, unjustified. In truth, if rationalists are not at least somewhat more swayed by good arguments than bad ones (as compared to the general population), we’re doing something wrong.
Not really, we’re just equally susceptible to irrational biases.
Trivial proof for LW rationalists: read any LW thread regarding a controversial self-improvement topic, including nutrition, exercise, dating advice, etc., where people are diametrically opposed in their positions, using every iota of their argumentative reasoning power in order not to open themselves to even understanding their opponents' position, let alone reasoning about it. It is extremely improbable that all divisive advice (including diametrically-opposed divisive advice) is incorrect, and therefore that the bulk of LW rationalists are correctly rejecting it.
(Side note: I didn’t say anything about receptiveness to good arguments, I said receptiveness to unsolicited advice, as did the comment I was replying to. I actually assumed that we were talking about bad arguments, since most arguments, on average, are bad. My point was more that there are many topics which rationalists will reject out of hand without even bothering to listen to the arguments, good or bad, and that in this, they are just like any other human being. The point isn’t to invoke a fallacy of the grey, the point is for rationalists not to pat ourselves on the back in thinking we’re demonstrably better at this than other human beings: demonstrably, we’re not.)
This shouldn’t be a puzzle. Reinforcement happens, consciously or subconsciously. Why in the name of FSM would you choose to relinquish the power to actually control what would otherwise happen just subconsciously?
How is that not, on the face of it, a paragon, a prototype of optimization? Isn't that what optimizing is: more or less consciously changing what is otherwise unconscious?
I don’t think I would be suspicious of him, as long as I agreed with the behaviours he was trying to reinforce. (I don’t know for sure–my reactions are based only on a thought experiment.) I think I would be grateful, both that he cared enough about me to put that much time and effort in, and that he considered me emotionally mature enough to tell me honestly what he was doing.
However, I do think that being aware of his deliberate reinforcement might make it less effective. Being reinforced for Behaviour A would feel less like “wow, the world likes it when I do A, I should do it more!” and more like “Person X wants me to do A”, which is a bit less motivating.
I don’t think I would be suspicious of him, as long as I agreed with the behaviours he was trying to reinforce.
Really? So say I tell you that all those times that I smiled at you and asked how you were doing were part of a long term plan to change the way you behave. The next day I smile and ask you how you’re doing. Has my confession done nothing to change the way you think about my question?
I'm saying that things like smiles and friendly, concerned questions have a certain importance for us that is directly undermined by their being used for the purposes of changing our behavior. I don't think using them this way is always bad, but it seems to me that people who generally treat people this way are people we tend not to like once we discover the nature of their kindness.
Like I said, thought experiments about "how would I feel if X happened" are not always accurate. However, when I try to simulate that situation in my head, I find that although I would probably think about his smile and question differently (and be more likely to respond with a joke along the lines of "trying to reinforce me again, huh?") I don't think I would like him less.
Anyway, I think I regularly use smiles and “how are you doing?” to change the way people behave...namely, to get strangers, i.e. coworkers at a new job, to start liking me more.
Your position is that you have a certain emotional response to knowing someone is trying to modify your behaviour. My position is that I have a different emotional response. I can imagine myself having an emotional response like yours...I just don’t. (Conversely, I can imagine someone experiencing jealousy in the context of a relationship, but romantic jealousy isn’t something I really experience personally.) I don’t think that makes either of us wrong.
Well, my position is that doing things like asking how someone is doing so as to reinforce behavior rather than because you want to know the answer is ethically bad. I used the example of the friend to try to motivate and explain that position, but at some point if you are totally fine with that sort of behavior, I don't have very much to argue with. I think you're wrong to be fine with that, but I also don't think I can mount a convincing argument to that effect. So you've pretty much reached the bottom of my thoughts on the matter, such as they are.
I’m curious about whether your reasons for considering this kind of behaviour “unethical” are consequentialist (i.e. a world where people do X is going to be worse overall than a world where no one does X) or deontological (there are certain behaviours, like lying or stealing, that are just bad no matter what world they take place in, and using social cues to manipulate other people is a behaviour that falls into that class.)
Ah, I'm not a consequentialist or a deontologist, but I do think this is a case where intentions are particularly important. Doing this kind of reinforcement training to someone without their knowledge is characteristically disrespectful if you just do it to help them, but it may also be the right thing to do in some cases (I'm toning down my claim a bit). Doing it with the result that they are harmed is vicious (that is, an expression or manifestation of a vice) regardless of your intentions. So that puts me somewhere in the middle.
Doing this kind of reinforcement training to someone without their knowledge is characteristically disrespectful if you just do it to help them, but it may also be the right thing to do in some cases (I’m toning down my claim a bit).
I wouldn’t necessarily say that. Doing it when you know they don’t (or would not) want you to is disrespectful.
Doing it with the result that they are harmed is vicious (that is, an expression or manifestation of a vice) regardless of your intentions.
This definitely seems false. It is the expected result, given the information that you have (or should be expected to have), that can indicate viciousness, not the actual result. For example, I could reward my children such that they never jaywalk (still not quite sure what this is) and only cross the road at official crossings. Then one of my children gets hit by a car waiting at a crossing when they would have been fine crossing the street earlier. I haven't been vicious. My kid has been unlucky.
In the general case it is never the result that determines whether your decision was the right decision to make in the circumstance. It is the information available at the time. (The actual result can be used as a proxy by those with insufficient access to your information at the time or when differing incentives would otherwise encourage corruption with 'plausible deniability').
On the unlucky kid: fair enough. But using positive reinforcement to make someone violent or cowardly, even if you think you're benefiting them, is vicious. That's the sort of case I was thinking about.
I disagree with you about the actual vs. expected result, but that's a bigger discussion.
It depends on whether or not they should be peaceful, I guess. But if they’re not your child or student or something like that, then it’s probably disrespectful at the least.
Well, my position is that doing things like asking how someone is doing so as to reinforce behavior rather than because you want to know the answer is ethically bad.
Can you express your personal ethics explicitly and clarify where it comes from?
If you could trace your ethics backward from “it’s unethical when people consciously use punishment/reward system to modify my behavior to their liking” to some basic ideas that you hold inviolate and cannot further trace to anything deeper, I’d appreciate it.
I think there are basically two aspects to our ethical lives: the biological and habituated arrangement of our emotions, and our rationality. Our lives involve two corresponding phases. As children, we (and our teachers, parents, etc.) aim at developing the right kinds of emotional responses, and as adults we aim at doing good things. Becoming an adult means having an intellectual grasp of ethics, and being able (if one is raised well) to think through one's actions.
When you use positive reinforcement training, you treat someone as if they were in the childhood phase of their development, even if the behavioral modification is fairly superficial. This isn't necessarily evil or anything, but it's often disrespectful if it stands in place of appealing to someone's ethical rationality. I guess an analogue would be using dark arts tactics to convince someone to have the right opinions about something. It's disrespectful because it ignores or holds in contempt their ability to reason for themselves.
That’s sensible, but realize that it’s atypical. Make those expectations clear before you cry foul in a relationship.
If you make an appeal to the “adult” in most people, you’ll confuse and infuriate them (“why is he lecturing me?”). Better (by default) stick with a smile when they do right by you, and ignore/brush off when possible if they don’t.
I think I disagree with this because the brain is modular, an evolutionary hodge-podge of old and new subroutines each with different functions. Only a few of those modules are conscious, self-aware, deliberative thinkers capable of planning ahead and accurately judging the consequences of potential actions to decide what to do. The rest is composed of a series of unconscious impulses, temptations, and habits. When I say “I,” I refer to the former. When I say “my brain”, I refer to the latter.
And I am always trying to trick and manipulate my brain. If I’m on a diet, I’ll lock the refrigerator door to make it harder to get a midnight snack. I’ll go grocery shopping only when I’m full. I’ll praise myself when I eat celery, etc.
Personally, I only identify with, approve of, and demand respect for those conscious, self-reflective modules, and the various emotions and habits that are in harmony with them. And if someone who loves me wants to help me trick my brain into better aligning with my values, I'm all for it. Even if a particular technique to condition my brain requires that I don't know what they're doing.
And when it comes to reinforcing behaviors that align with my extrapolated volition (“What is OTOH likely to want to do, but is too scared/lazy/squicked out/biased to get herself to do?”), deliberate, considered, scientifically sound manipulation is probably better than the subconscious manipulation we all engage in, because the chances of getting undesired results are lower.
My objection is basically that it's disrespectful (to the point of being unethical) to do this sort of thing to someone without their consent. As with many such things, there are going to be cases where someone has not or cannot actually give consent, and so we have to ask whether or not they would do so if they had all the facts on the table. In these cases, it's a tricky question whether or not you can assume someone's consent, and it is often best to err on the side of not assuming consent.
I notice that you put this in terms of someone you love manipulating your habits in accordance with your values. That sounds a lot like a case where someone is safe assuming your consent.
I was objecting, in the OP, to the lack of any discussion of what seems to me to be the central moral question in this kind of activity, as well as what I took to be the view that this kind of consent can be quite broadly assumed. With some very few exceptions, I think this is unethical.
The thing is, other people’s actions and reactions will always sway our behavior in a particular direction, and our actions will do the same to others. We evolved to speak and act in such a way as to get allies, friends, mates, etc. - ie, make people like us so we can then get them to do things for us. Those who were good at getting others to like and help them reproduced more frequently than those who were not. Even if I were to agree that influencing others’ behavior without their explicit knowledge and consent is unethical, I can’t not do that.
My every smile, frown, thank-you, sorry, and nagging criticism will do something to affect the behavior of others, and they won’t be thinking “Ah, she thanked me, this will have the effect of reinforcing this behavior.” So if I can’t avoid it, the next best thing would be to influence skillfully, not clumsily. In both cases, the other person’s behavior is being influenced, and in both cases they are not explicitly aware of this. The only difference in the second case is that I know what I’m doing.
I definitely understand where you’re coming from. I can empathize with the sense of violation and disrespect, and I agree that in a lot of situations such behavior is problematic, but I probably wouldn’t agree with you on what situations, or how often they occur. This was my biggest problem with PUA when I first heard about it. I found it horrifyingly offensive that men might take advantage of the security holes in my brain to get me to sleep with them. But...confident, suave men are attractive. If a man were “naturally” that way, then he’s “just sexy,” but if someone who didn’t initially start out that way explicitly studies how to behave in an attractive manner, that’s creepy.
Why? It’s not like no one’s ever allowed to try to get anyone to sleep with them, and it’s not like I would favor a strict rule of a complete, explicit disclaimer explaining, “Everything I say is with the sole intention of convincing you to have sex with me.” (Such a disclaimer wouldn’t even be true, necessarily. Human interaction is complex and multi-faceted, and any given conversation would have multiple motives, even if one dominates.)
So what’s the difference between a man who’s “just sexy” and a “creepy PUA” who behaves the same way? (We’ll ignore some of the blatant misogyny and unattractive bitterness among many PUA, because many women find the abstract concept itself creepy, with or without misogyny.)
I think it’s the knowledge differential, which causes a very skewed power balance. The naturally confident, extroverted man is unconsciously playing out a dance which he never really examined, and the woman he’s chatting up is doing the same. When this man is replaced with a hyper self-aware PUA, the actions are the same, but the woman is in the dark while the man can see exactly why what he says causes her to react the way she does.
It's like a chess game between Garry Kasparov and a guy who only vaguely realizes he's playing chess. Yes, it's unfair. But I think the more practical solution is not making Kasparov handicap himself, but teaching the other guy how to play chess.
I think the line between conscious and unconscious influencing of behavior is thinner and more fluid than you seem to say, more like a sliding scale of social self-awareness. And the line between manipulation and self-improvement is even thinner. What if I decided to be much nicer to everyone all of a sudden because I wanted people to like me? The brain is not a perfect deceiver; soon I’ll probably fake it til I make it, and everyone’s lives would be more pleasant.
In the end, I treat emotional manipulation (which involves changing one’s emotional responses to certain behaviors, rather than telling people factual lies) the way I treat offense. It’s just not practical to ban offending people. I think it’s more useful to be aware of what offends us, and moderate our responses to it. In the same way, it’s not possible to ban influencing other people’s behavior without their explicit knowledge; the naturally sexy man does this just as much as the PUA does. It’s possible to have a norm of taking the other person’s wishes into account, and it’s possible to study the security holes in our own minds and try to patch them up.
So if I can’t avoid it, the next best thing would be to influence skillfully, not clumsily. In both cases, the other person’s behavior is being influenced, and in both cases they are not explicitly aware of this. The only difference in the second case is that I know what I’m doing.
I think there is a difference. You’re right that all our behavior has or can have a reinforcing effect on other people. But smiles, and frowns, and thank-yous and such aren’t therefore just reinforcers. When I smile at someone, I express something like affection, and if I don’t feel any affection, I smile falsely. All these kinds of behaviors are the sorts of things that can be done honestly or falsely, and we ought to do them honestly. We do this with children, but with adults it’s disrespectful.
It might be possible to smile at someone for the sake of reinforcing some behavior of theirs, and to feel affection all the while, but my sense is that either a smile is an expression of affection, or it is done for some ulterior end.
I think your initial reaction to PUA is spot on. It’s a monstrous practice.
my sense is that either a smile is an expression of affection, or it is done for some ulterior end.
Here's where I think human thinking is more complicated, muddled, and mutually-reinforcing than you say. In the example of saying "Thank you," is it really so inconceivable that someone might say "Thank you," while thinking (or, more likely, wordlessly intuiting) something along the lines of "I'm grateful and happy that this person did this, and I would like them to do it again"? In fact, many of these "reinforcement" or "animal training" tips, while phrased repulsively, mostly end up advising, "Remember to consistently express the gratitude you feel, and refrain from expressing any annoyance you might feel."
Here’s what I might think, if I were the wife in that example: “Not only does nagging and expressing annoyance when I feel my reasonable expectations were not met belittle and irritate my husband, it doesn’t even work. He still doesn’t put the damn clothes in the damn hamper! We’re both less happy, and I didn’t even get him to change.” If I understand you correctly, that last part, where I discuss the efficacy of my nagging at getting me what I want, sounds dishonestly manipulative to you.
We all expect things from others, and we all care about others. Is it always, inevitably wrong to sully considerations of caring/being a nice person with considerations of ensuring your expectations and needs get met? Or is it that the only legitimate way to get other human beings to meet your expectations is to sit them down and explain it all to them, even if they’re annoyed and made unhappy by this Talk and its lack of emotional salience means it doesn’t work?
Saying “Thank you” and ignoring the clothes that don’t get put in the hamper works. It bypasses defensive, angry, annoyed reactions to nagging. It accurately expresses that clothes-in-the-hamper make me happy—in fact, more directly than the nagging method did, because the nagging method required the husband to infer that clothes-on-floor causes irate nagging, therefore clothes-in-the-hamper must cause happiness and gratitude. He’s happy, because he feels appreciated and doesn’t feel like he’s a teenager again being prodded by his mother. I’m happy, because I don’t feel like a grumpy middle-aged mother of a teenager. The clothes are in the hamper.
Was it wrong that I started all this because I was annoyed at having to nag him and wanted a more reliable way to get him to put his clothes in the hamper? Even though the (empirically sound) advice only told me to frame the same content—“Floor bad, hamper good”—in a more positive light, expressing happiness and gratitude when things go right, rather than irritation and disappointment when things go wrong? Even though once I shook myself of the nagging mindset the happiness and gratitude was not grudgingly given, was not an inaccurate portrayal of my now-happier mental state, was not intended to belittle my husband, but only to make us both happier AND get him to put the clothes in the hamper?
Becoming an adult means having an intellectual grasp of ethics, and being able (if one is raised well) to think through one's actions.
Even without any feedback from others? Or are you OK with a specific kind of feedback? What kind would it be? Is explicitly telling a person what you expect of them OK? If so, when does it become not OK?
Yes, even without feedback, though it's always helpful to have other people to think with. As to when telling someone what to do is okay and when it's not, I can't imagine there's any general rule, but I also expect we're all familiar with the kinds of situations in which you can do this and those in which you can't.
As to when telling someone what to do is okay and when it's not, [...] I also expect we're all familiar with the kinds of situations in which you can do this and those in which you can't.
Just to be clear: if a hundred randomly-selected humans are presented with an identical list describing, in full detail, a hundred cases where person A tells person B what to do, and those humans are asked to classify those cases into acceptable, unacceptable, and borderline, your expectation is that most or all of those humans will arrive at the same classifications?
Really? To me, it depends substantially on how the list is generated. If we try to “rip from the headlines,” I’d expect substantial disagreement. If we follow you around and watch you tell people what to do in your ordinary week, I expect more agreement.
In short, there are lots of points of disagreement about social interaction, but there are far more mundane and uncontroversial interactions than controversial ones.
Well, I certainly agree that it’s possible to generate a list of a hundred cases that 95% of people would agree on the classification of.
But if you followed me around for a week and picked samples randomly from that (both of cases where I tell people what to do, and cases where I could have told people what to do and didn’t), and you asked a hundred people, I expect you’d get <60% congruence. I work in an office full of Americans and Israelis, I am frequently amused and sometimes horrified by the spread of opinion on this sort of thing.
Of course, if you narrowed your sample to middle-class Americans, you might well get up above 90%.
Edit: I should explicitly admit, though, that I was not envisioning a randomly generated list of cases. It was a good question.
I had something like a set of mundane cases in mind. My post was just meant to point out that discerning these sorts of situations is not something we use a set of rules or criteria for (at least no fixed set we could usefully enumerate), but most people are socially competent enough to tell the difference.
I agree that most people who share what you’re calling “social competence” within a given culture share a set of rules that determine acceptable utterances in that culture, and that those rules are difficult to enumerate.
Roughly, that we often respond to others’ ability to cause us harm (whether by modifying our behavior or our bank accounts or our internal organs or whatever other mechanism) as a threat, independent of their likelihood of causing us harm.
So if you demonstrate, or even just tell me about, your ability to do these things, then while depending on the specific context, my specific reaction will be somewhat different… my reaction to you knowing my bank PIN number will be different from my reaction to you knowing how to modify my behavior or how to modify the beating of my heart or how to break into my home… they will all have a common emotional component: I will feel threatened, frightened, suspicious, attacked, violated.
That all is perfectly natural and reasonable. And a common and entirely understandable response to that might be for me to declare that, OK, maybe you are able do those things, but a decent or ethical person never will do those things. (That sort of declaration is one relatively common way that I can attempt to modify your likelihood of performing those actions. I realize that you would only consider that a form of manipulation if I realize that such declarations will modify your likelihood of performing those actions. Regardless, the declaration modifies your behavior just the same whether I realize it or not, and whether it’s manipulation or not.)
But it doesn’t follow from any of that that it’s actually unethical for you to log into my bank account, modify my heartbeat, break into my home, or modify my behavior. To my mind, as I said before, the determiner of whether such behavior is ethical or not is whether the result leaves me better or worse off.
Breaking into my home to turn off the main water valve to keep my house from flooding while I'm at work is perfectly ethical, indeed praiseworthy, and I absolutely endorse you doing so. Nevertheless, I suspect that if you told me that you spent a lot of time thinking about how to break into my home, I would become very suspicious of you.
Again, my emotional reaction to your demonstrated or claimed threat capacity is independent of my beliefs about your likely behaviors, let alone my beliefs about your likely intentions.
Roughly, that we often respond to others’ ability to cause us harm (whether by modifying our behavior or our bank accounts or our internal organs or whatever other mechanism) as a threat, independent of their likelihood of causing us harm.
This seems very implausible to me. I often encounter people with the ability to do me great harm (a police officer with a gun, say), and this rarely if ever causes me to be angry, or feel as if my dignity has been infringed upon, or anything like that. Yet these are the reactions typically associated with finding out you’ve been intentionally manipulated. Do you have some independent reason to believe this is true?
But treating human beings, especially adults, like animals is characteristically unethical.
This statement without context is clearly incorrect; there are all sorts of behaviors we can ethically execute with respect to both humans and other animals. I understand that what you and the OP both mean to connote is particular behaviors which we restrict in typical contexts only to non-human animals, but if you’re going to label them as unethical when applied to humans it helps to specify what behaviors and context those are.
manipulating the behavior of other people by means other than convincing them that they should behave in a certain way seems to me to be almost definitional of a dark art.
That’s a little more specific, but not too much, as I’m not really sure what you mean by “convincing” here.
That is, if at time T1 I don’t exhibit behavior B and don’t assert that I should exhibit B, and you perform some act A at T2 after which I exhibit B and assert that I should exhibit B, is A an act of convincing me (and therefore OK on your account) or not (and therefore unethical on your account)? How might I test that?
never do this to other people without their explicit consent
This, on the other hand, is clear. Thank you. I disagree with it strongly.
Eliezer replied: “Well, three weeks ago I was working with Anna and Alicorn, and every time I said something nice they fed me an M&M.”
That story doesn’t trouble you at all?
For most people, there’s lots of low hanging fruit from trying to recognize when they are reinforcing and punishing behaviors of others. Also, positive reinforcement is more effective at changing behavior than positive punishment.
But that doesn't mean that we should embrace conditioning-type behavior-modification wholesale. I'm highly doubtful that conditioning responses are entirely justifiable by decision-theoretic reasons. And "not justifiable by decision-theoretic reasons" is a reasonable definition of non-rational. Which implies that relying on those types of processes to change others' behaviors might be unethical.
Does it trouble me at all? I suppose. Not a huge amount, but some. Had Esar said “Doing this to people without their consent is troubling” rather than “never do this to other people without their explicit consent” I likely wouldn’t have objected.
My response to the rest of this would mostly be repeating myself, so I’ll point to here instead.
More generally, “conditioning-type behavior-modification” isn’t some kind of special category of activity that is clearly separable from ordinary behavior. We modify one another’s behavior through conditioning all the time. You did it just now when you replied to my comment. Declaring it unethical across the board seems about as useful as saying “never kill a living thing.”
This statement without context is clearly incorrect...
You seem to know what I mean, so I won't go into a bunch of unnecessary qualifications.
is A an act of convincing me?
Not necessarily. Is the meaning of ‘convince’ really unclear? Threatening someone with a gun seems to satisfy your description, but it’s obviously not a case of convincing. I’m not sure what you’re unclear about.
Suppose I decide I want my coworkers to visit my desk more often at work, and therefore begin a practice of smiling at everyone who visits, keeping treats on my desk and inviting visitors to partake, being nicer to people when they visit me at my desk than I am at other times, and otherwise setting up a schedule of differential reinforcement designed to increase the incidence of desk-visiting behavior, and I do all of that without ever announcing to anyone that I’m doing it or why I’m doing it, let alone securing anyone’s consent. (Let alone securing everyone’s consent.)
Do you consider that an example of unethical behavior? I don’t.
Now, maybe you don’t either. Maybe it’s “obviously” not an example of manipulating the behavior of other people by means other than convincing them that they should behave in a certain way. I don’t really know, since you’ve declined to clarify your constraints. But it sure does seem to match what you described.
Do you consider that an example of unethical behavior? I don’t.
You’re right that this doesn’t seem quite unethical, but it is awfully creepy and I’m not sure how to pull my intuitions apart there. Sitting across from someone who is faking affection and smiles and pleasantries so as to manipulate my behavior would cause me to avoid them like the plague.
In professional environments I find this happens all the time, and when the fake friendliness is discovered as such, the effect reverses considerably. If it’s terribly important to something’s being effective that the person you’re doing it to doesn’t know what’s going on, it’s probably bad.
(nods) Absolutely. I could have also framed it to make it seem far creepier, or to make it seem significantly less creepy.
In particular, the use of loaded words like “faking” and “manipulate” ups the creepy factor of the description a lot. The difference between faking affection and choosing to be affectionate is difficult to state precisely, but boy do we respond to the difference between the words!
I agree that most activities which depend on my ignorance for their effectiveness are bad. I even agree that a higher percentage of activities which depend on my ignorance for their effectiveness are bad than the equivalent percentage of activities that don’t so depend.
That said, you seem to be going from that claim to the implicit claim that they are bad by virtue of depending on my ignorance. That’s less clear to me.
Well, I think it’s important because IMHO that negative emotional response is what underlies the (incorrect) description of the corresponding behavior as unethical. But I expect Esar would find that implausible.
'Taboo with an eye to this question', not 'answer this question'. I'd already noticed the pattern that people consider finding something creepy to be sufficient reason to label it unethical, but that observation isn't useful for very much beyond predicting other people's labeling habits.
Oh, I see. Sorry, misunderstood. I could replace “creepy” everywhere it appears with “emotionally disquieting”, but I’m not sure what that would help. I figured using the same language Esar was using would be helpful, but I may well have been wrong.
I could have also framed it to make it seem far creepier
I’ll put it simply: if someone asks me about my kids, neither to be polite nor because they care, but because they want to change the way I behave, then they’re (in most cases) being manipulative and insincere. While perhaps they’re not wronging me, per se, it’s certainly not something that speaks well of them, ethically speaking. If you find this controversial, then you surprise me.
It would be bad advice, I think, to encourage people to use positive reinforcement on others when their ignorance is necessary for it to be effective. Not just practically bad advice, as people are pretty good at picking up on fake friendliness. But full stop ethically damaging advice, if taken seriously. I’m not saying that every such case is going to be unethical, but I’m not in the business of lawlike ethical principles anyway.
That said, you seem to be going from that claim to the implicit claim that they are bad by virtue of depending on my ignorance. That’s less clear to me.
No, what I said was that behaviors which depend on someone’s ignorance for their effectiveness are often also bad behaviors. I didn’t say anything one way or the other about a stricter relation between the two properties, but I’ll say now that I don’t think they’re unrelated.
I agree that asking you about your kids solely to change your behavior is manipulative. I also agree that it’s insincere. (Which is an entirely distinct thing.) I would also say that asking you about your kids solely to be polite is insincere. I would not agree that any of these are necessarily unethical.
I am not quite sure what you mean by “ethically damaging advice.” I agree with you that it’s not always unethical to positively reinforce others without their knowledge. I would agree that “Positively reinforcing others without their knowledge is a good thing to do, do it constantly” is advice that, if taken seriously, would often lead me to perform unethical acts. I can accept calling it unethical advice for that reason, I suppose. But I also think that “Positively reinforcing others without their knowledge is a bad thing to do, never do it.” is unethical advice in the same (somewhat unclear) sense.
I agree that behaviors that depend on others’ ignorance are often also bad behaviors. Behaviors that depend on others’ knowledge are also often bad behaviors.
Well, I think I’d stand by what I said originally. Though I guess I’m counting on no one reading that as the exceptionless proposition ‘for all x such that x is a case of using positive reinforcement without someone’s knowledge, x is unethical’. Likewise, if someone asked me, I’d say ‘Don’t ever shoplift, it’s unethical.’ Though I wouldn’t want or expect anyone to read that as ‘all cases of shoplifting are, without exception, unethical.’
I think it’s false to suggest that pleasantries are being outright faked. This person is probably not sitting there going, “Oh, woe is me, I have to pay the horrible price of smiling and being nice to these imbeciles in order to make them give me what I want; I would never do that otherwise.” In fact, why would he even want his coworkers to visit his desk more if he had such utter contempt for them that he had to fake affection wholesale?
The status interpretation: generally, manipulation increases the status of the manipulator and decreases the status of the manipulated. Resistance to manipulation could then be a status-preserving move, and interest in manipulation could be a status-increasing move. What articles like this try to do is lower the status effects of manipulation (in both directions)- Luke proudly recounts the time Eliezer manipulated him so that he could better manipulate Eliezer. If being molded like this is seen more positively, then resistance to being molded (by others in the community) will decrease, and the community will work better and be happier. As well, I suspect that people are much more comfortable with manipulations if they know how to do them themselves- if positive reinforcement is a tool used by creepy Others, it’s much easier to dislike than if it’s the way you got your roommate to finally stop annoying you.
This, with extra emphasis!
I’m confused, not only by the beginning of this comment, but by several others as well.
I thought being a LessWronger meant you no longer thought in terms of free will. That it’s a naive theory of human behavior, somewhat like naive physics.
I thought so, anyway. I guess I was wrong? (This comment still upvoted for amazing analysis.)
Autonomy and philosophical free will are different things. Philosophical free will is the question “well, if physical laws govern how my body acts, and my brain is a component of my body, then don’t physical laws govern what choices I make?”, to which the answer is mu. One does not need volition on the level of atoms to have volition on the level of people- and volition on the level of people is autonomy.
(You will note that LW is very interested in techniques to increase one’s will, take more control over one’s goals, and so on. Those would be senseless goals for a fatalist.)
Thanks for clarifying that. I should note that I am very interested in techniques for self-improvement, too. I am currently learning how to read. (Apparently, I never knew :( ) And also get everything organized, GTD-style. (It seems a far less daunting prospect now than when I first heard of the idea, because I’m pseudo-minimalist.)
I’m still surprised at the average LWer’s reaction here, probably because the nature of ‘volition on the level of people’ isn’t clear to me. Not something I expect you to answer; clarifying the distinction was helpful enough.
I think it’s misguided personally. You’re already being manipulated this way by your environment whether or not you realize it.
Well, I’m claiming that this kind of manipulation is often, even characteristically, unethical. Since my environment is not capable of being ethical or unethical (that would be a category mistake, I think) then that’s not relevant to my claim.
I was referring, though, to the case of your friend using reinforcement to alter your behavior in a way that would benefit you. I just have a hard time seeing someone trying to help you as unethical behavior.
It does depend on whose definition of ‘help’ they’re using.
Good point. Do you think it would be ethical if they were helping to fulfill your preferences?
Usually, yes, though there are several qualifications and corner cases.
Agreed, there probably are.
That’s fair. I should tone down my point and say that doing this sort of thing is disrespectful, not evil or anything. It’s the sort of thing parents and teachers do with kids. With your peers, unsolicited reinforcement training is seen as disrespectful because it stands in lieu of just explaining to the person what you think they should be doing.
In my experience, telling other people how I think they should behave is also often seen as disrespectful.
Often it is, we agree. But it’s the ‘telling’ there that’s the problem. A respectful way to modify someone’s behavior is to convince them to do something different (which may mean convincing them to subject themselves to positive reinforcement training). The difference is often whether we appeal to someone’s rationality, or take a run at their emotions.
I agree that there are respectful ways to convince me to do something different, thereby respectfully modifying my behavior.
Many of those ways involve appealing to my rationality.
Many of those ways involve appealing to my emotions.
There are also disrespectful ways to convince me to do something different.
Many of those ways involve appealing to my rationality.
Many of those ways involve appealing to my emotions.
So, by ‘appealing to someone’s rationality’ I mean, at least, arguing honestly. Perhaps I should have specified that. Do you still think there are such examples?
Do I think there are disrespectful ways to convince me to do something different that involve arguing honestly? Sure. Do you not?
Not that I can think of, no. Can you think of an example?
Sure. Suppose I believe my husband is a foolish, clumsy, unattractive oaf, and I want him to take dance lessons. Suppose I say to him, “Hey, husband! You are a foolish, clumsy, unattractive oaf. If you take dance lessons, you will be less clumsy. That’s a good thing. Go take dance lessons!” I would say, in that situation, I have presented an honest, disrespectful argument to my husband with the intention of convincing him to do something different.
That’s not really a very good example. That in virtue of which it’s disrespectful is unconnected to that in virtue of which it appeals to reason.
I agree completely that my example is disrespectful in virtue of (in vice of?) something other than its appeal to reason.
If that makes it a poor example of what you’re asking for, I misunderstood what you were asking for. Which, given that you’re repeatedly asking me for “an example” without actually saying precisely what you want an example of, is not too surprising.
So, perhaps it’s best to back all the way out. If there’s something specific you’d like me to provide an example of, and you can tell me what it is, I’ll try to provide an example of it if I can. If there isn’t, or you can’t, that’s OK too and we can drop this here.
Well this runs into the problem of giving unsolicited advice. Most people don’t respond well to that. I think it’s probably difficult for most rationalists to remember this since we are probably more open to that.
Not really. Rationalists are just open to different advice. There’s lots of advice rationalists will reject out of hand. (Some of which is actually bad advice, and some of which is not.)
Everyone believes themselves to be open-minded; the catch is that we’re all open to what we’re open to, and not open to what we’re not.
Well I agree that none of us is completely rational when it comes to accepting advice. But don’t you think rationalists are at least better at that than most people?
Based on what evidence?
It’s a guess but I think it’s a fairly logical one. Think about all the stories of rationalists who’ve overcome a belief in God, or ESP or whatever. Seems to me that demonstrates an ability to suppress emotion and follow logic that should carry over into other areas.
As I mentioned in another comment, you can just read LW threads on contentious topics to observe as a matter of practice that LW rationalists at least are no different than other people in this respect: open only to what they’re not already opposed to.
This is relevant evidence: evidence directly connected to the topic (openness to unsolicited advice). Your evidence is not, because it describes situations where rationalists changed their minds on their own. This is really different—changing your own mind is in no way similar to being open to someone else changing your mind, since somebody else trying to change your mind creates internal resistance in a way that changing your own mind does not.
It’s like using people’s ability to walk on dry land as evidence of their ability to swim underwater, when an actual swimming test shows the people all drowning. ;-)
Since I remember your username being associated with various PUA discussions, I assume you at least partly have those in mind. I can’t say much about those, never having really been part of the discussion, but I’ll note that it’s a particularly contentious issue. (My position has changed somewhat, but given that I only had a vague awareness of the PUA community before, and not through anyone who participated in or approved of them, I don’t consider that especially remarkable.) Less Wrongers seem to be more pliable than the norm on less contentious matters which still provoke significant resistance in much of the population, such as the safety of fireplaces.
It’s not the only one. See any thread on cryonics, how well SIAI is doing on various dimensions, discussions of nutrition, exercise, and nootropics… it’s not hard to run across examples of similar instances of closed-mindedness on BOTH sides of a discussion.
My point is: not nearly enough.
As I mentioned in that thread, LWers skew young and toward not already having fireplaces: that they’d be less attached to them is kind of a given.
Not only is it similar, the abilities in those areas are significantly correlated.
Agreed. Wanting to be “the kind of person who changes their mind” means that when you get into a situation of someone else trying to change your mind, and you notice that you’re getting defensive and making excuses not to change your mind, the cognitive dissonance of not being the kind of person you want to be makes it more likely, at least some of the time, that you’ll make yourself be open to changing your mind.
This is a nice idea, but it doesn’t hold up that well under mindkilling conditions: i.e. any condition where you have a stronger, more concrete loyalty to some other chunk of your identity than being the kind of person who changes your mind, and you perceive that other identity to be threatened.
It also doesn’t apply when you’re blocked from even perceiving someone’s arguments, because your brain has already cached a conclusion as being so obvious that only an evil or lunatic person could think something so stupid. Under such a condition, the idea that there is even something to change your mind about will not occur to you: the other person will just seem to be irredeemably wrong, and instead of feeling cognitive dissonance at trying to rationalize, you will feel like you are just patiently trying to explain common sense to a lunatic or a troll.
IOW, everyone in this thread who’s using their own experience (inside view) as a guide to how rational rationalists are, is erring in not using the available outside-view evidence of how rational rationalists aren’t: your own experience doesn’t include the times where you didn’t notice you were being closed-minded, and thus your estimates will be way off.
In order to use that ability, you have to realize it needs to be used. If someone is setting out to change their own mind, then they have already realized the need. If someone is being offered advice by others, they may or may not realize there is anything to change their mind about. It is this latter skill (noticing that there’s something to change your mind about) that I’m distinguishing from the skill of changing your mind. They are not at all similar, nor is there any particular reason for them to be correlated.
Really? You don’t think the sort of person who tries harder than average to actually change their mind more often will also try harder than average to examine various issues that they should change their mind about?
But that isn’t the issue: it’s noticing that there is something you need to examine in the first place, vs. just “knowing” that the other person is wrong.
Honestly, I don’t think that the skill of being able to change your mind is all that difficult. The real test of skill is noticing that there’s something to even consider changing your mind about in the first place. It’s much easier to notice when other people need to do it. ;-)
Inasmuch as internal reflective coherence, and a desire to self-modify (towards any goal) or even just the urge to signal that desire are not the same thing...yeah, it doesn’t seem to follow that these two traits would necessarily correlate.
I hadn’t considered that. Ego does get in the way more when other people are involved.
This feels like an equivocating-shades-of-grey argument, of the form ‘nobody is perfectly receptive to good arguments, and perfectly unswayed by bad ones, therefore, everyone is equally bad at it.’ Which is, of course, unjustified. In truth, if rationalists are not at least somewhat more swayed by good arguments than bad ones (as compared to the general population), we’re doing something wrong.
Not really, we’re just equally susceptible to irrational biases.
Trivial proof for LW rationalists: read any LW thread regarding a controversial self-improvement topic, including nutrition, exercise, dating advice, etc., where people are diametrically opposed in their positions, using every iota of their argumentative reasoning power in order not to open themselves to even understanding their opponents’ position, let alone reasoning about it. It is extremely improbable that all divisive advice (including diametrically-opposed divisive advice) is incorrect, and therefore that the bulk of LW rationalists are correctly rejecting it.
(Side note: I didn’t say anything about receptiveness to good arguments, I said receptiveness to unsolicited advice, as did the comment I was replying to. I actually assumed that we were talking about bad arguments, since most arguments, on average, are bad. My point was more that there are many topics which rationalists will reject out of hand without even bothering to listen to the arguments, good or bad, and that in this, they are just like any other human being. The point isn’t to invoke a fallacy of the grey, the point is for rationalists not to pat ourselves on the back in thinking we’re demonstrably better at this than other human beings: demonstrably, we’re not.)
It amuses me how readily my brain offered “I am not neither open-minded!” as a response to that.
But your environment includes people, dude.
This shouldn’t be a puzzle. Reinforcement happens, consciously or subconsciously. Why in the name of FSM would you choose to relinquish the power to actually control what would otherwise happen just subconsciously?
How is that not, on the face of it, a paragon, a prototype of optimization? Isn’t that what optimizing is, more or less: consciously changing what is otherwise unconscious?
I don’t think I would be suspicious of him, as long as I agreed with the behaviours he was trying to reinforce. (I don’t know for sure–my reactions are based only on a thought experiment.) I think I would be grateful, both that he cared enough about me to put that much time and effort in, and that he considered me emotionally mature enough to tell me honestly what he was doing.
However, I do think that being aware of his deliberate reinforcement might make it less effective. Being reinforced for Behaviour A would feel less like “wow, the world likes it when I do A, I should do it more!” and more like “Person X wants me to do A”, which is a bit less motivating.
Really? So say I tell you that all those times that I smiled at you and asked how you were doing were part of a long term plan to change the way you behave. The next day I smile and ask you how you’re doing. Has my confession done nothing to change the way you think about my question?
I’m saying that things like smiles and friendly, concerned questions have a certain importance for us that is directly undermined by their being used for the purposes of changing our behavior. I don’t think using them this way is always bad, but it seems to me that people who generally treat people this way are people we tend not to like once we discover the nature of their kindness.
Like I said, thought experiments about “how would I feel if X happened” are not always accurate. However, when I try to simulate that situation in my head, I find that although I would probably think about his smile and question differently (and be more likely to respond with a joke along the lines of “trying to reinforce me again, huh?”) I don’t think I would like him less.
Anyway, I think I regularly use smiles and “how are you doing?” to change the way people behave...namely, to get strangers, i.e. coworkers at a new job, to start liking me more.
Well, I guess I’ll tap out then. I’m not sure how to voice my position at this point.
Your position is that you have a certain emotional response to knowing someone is trying to modify your behaviour. My position is that I have a different emotional response. I can imagine myself having an emotional response like yours...I just don’t. (Conversely, I can imagine someone experiencing jealousy in the context of a relationship, but romantic jealousy isn’t something I really experience personally.) I don’t think that makes either of us wrong.
Well, my position is that doing things like asking how someone is doing so as to reinforce behavior rather than because you want to know the answer is ethically bad. I used the example of the friend to try to motivate and explain that position, but at some point if you are totally fine with that sort of behavior, I don’t have very much to argue with. I think you’re wrong to be fine with that, but I also don’t think I can mount a convincing argument to that effect. So you’ve pretty much reached the bottom of my thoughts on the matter, such as they are.
I’m curious about whether your reasons for considering this kind of behaviour “unethical” are consequentialist (i.e. a world where people do X is going to be worse overall than a world where no one does X) or deontological (there are certain behaviours, like lying or stealing, that are just bad no matter what world they take place in, and using social cues to manipulate other people is a behaviour that falls into that class.)
Ah, I’m not a consequentialist or a deontologist, but I do think this is a case where intentions are particularly important. Doing this kind of reinforcement training to someone without their knowledge is characteristically disrespectful if you just do it to help them, but it may also be the right thing to do in some cases (I’m toning down my claim a bit). Doing it with the result that they are harmed is vicious (that is, an expression or manifestation of a vice) regardless of your intentions. So that puts me somewhere in the middle.
I wouldn’t necessarily say that. Doing it when you know they don’t (or would not) want you to is disrespectful.
This definitely seems false. It is the expected result, given information that you have (or should be expected to have), that can indicate viciousness, not the actual result. For example, I could reward my children such that they never jaywalk (still not quite sure what this is) and only cross the road at official crossings. Then one of my children gets hit by a car waiting at a crossing when they would have been fine crossing the street earlier. I haven’t been vicious. My kid has been unlucky.
In the general case it is never the result that determines whether your decision was the right decision to make in the circumstance. It is the information available at the time. (The actual result can be used as a proxy by those with insufficient access to your information at the time, or when differing incentives would otherwise encourage corruption with ‘plausible deniability’.)
On the unlucky kid: fair enough. But using positive reinforcement to make someone violent or cowardly, even if you think you’re benefiting them, is vicious. That’s the sort of case I was thinking about.
I disagree with you about the actual vs. expected result, but that’s a bigger discussion.
On your account, is using positive reinforcement to make someone peaceful vicious? Virtuous? Neither?
It depends on whether or not they should be peaceful, I guess. But if they’re not your child or student or something like that, then it’s probably disrespectful at the least.
OK. Tapping out now.
Can you express your personal ethics explicitly and clarify where it comes from?
I’d be happy to try. Do you want a brief account specific to this topic, or something more general?
If you could trace your ethics backward from “it’s unethical when people consciously use punishment/reward system to modify my behavior to their liking” to some basic ideas that you hold inviolate and cannot further trace to anything deeper, I’d appreciate it.
I think there are basically two aspects to our ethical lives: the biological and habituated arrangement of our emotions, and our rationality. Our lives involve two corresponding phases. As children, we (and our teachers, parents, etc.) aim at developing the right kinds of emotional responses, and as adults we aim at doing good things. Becoming an adult means having an intellectual grasp of ethics, and being able (if one is raised well) to think through one’s actions.
When you use positive reinforcement training, you treat someone as if they were in the childhood phase of their development, even if the behavioral modification is fairly superficial. This isn’t necessarily evil or anything, but it’s often disrespectful if it stands in place of appealing to someone’s ethical rationality. I guess an analogue would be using dark arts tactics to convince someone to have the right opinions about something. It’s disrespectful because it ignores or holds in contempt their ability to reason for themselves.
That’s sensible, but realize that it’s atypical. Make those expectations clear before you cry foul in a relationship.
If you make an appeal to the “adult” in most people, you’ll confuse and infuriate them (“why is he lecturing me?”). Better (by default) stick with a smile when they do right by you, and ignore/brush off when possible if they don’t.
I think I disagree with this because the brain is modular, an evolutionary hodge-podge of old and new subroutines each with different functions. Only a few of those modules are conscious, self-aware, deliberative thinkers capable of planning ahead and accurately judging the consequences of potential actions to decide what to do. The rest is composed of a series of unconscious impulses, temptations, and habits. When I say “I,” I refer to the former. When I say “my brain”, I refer to the latter.
And I am always trying to trick and manipulate my brain. If I’m on a diet, I’ll lock the refrigerator door to make it harder to get a midnight snack. I’ll go grocery shopping only when I’m full. I’ll praise myself when I eat celery, etc.
Personally, I only identify with, approve of, and demand respect for those conscious, self-reflective modules, and the various emotions and habits that are harmony with them. And if someone who loves me wants to help me trick my brain into better aligning with my values, I’m all for it. Even if a particular technique to condition my brain requires that I don’t know what they’re doing.
And when it comes to reinforcing behaviors that align with my extrapolated volition (“What is OTOH likely to want to do, but is too scared/lazy/squicked out/biased to get herself to do?”), deliberate, considered, scientifically sound manipulation is probably better than the subconscious manipulation we all engage in, because the chances of getting undesired results are lower.
My objection is basically that it’s disrespectful (to the point of being unethical) to do this sort of thing to someone without their consent. As with many such things, there are going to be cases where someone has not or cannot actually give consent, and so we have to ask whether or not they would do so if they had all the facts on the table. In these cases, it’s a tricky question whether or not you can assume someone’s consent, and it often best to err on the side of not assuming consent.
I notice that you put this in terms of someone you love manipulating your habits in accordance with your values. That sounds a lot like a case where someone is safe assuming your consent.
I was objecting, in the OP, to the lack of any discussion of what seems to me to be the central moral question in this kind of activity, as well as what I took to be the view that this kind of consent can be quite broadly assumed. With some very few exceptions, I think this is unethical.
The thing is, other people’s actions and reactions will always sway our behavior in a particular direction, and our actions will do the same to others. We evolved to speak and act in such a way as to get allies, friends, mates, etc. - i.e., make people like us so we can then get them to do things for us. Those who were good at getting others to like and help them reproduced more frequently than those who were not. Even if I were to agree that influencing others’ behavior without their explicit knowledge and consent is unethical, I can’t not do that.
My every smile, frown, thank-you, sorry, and nagging criticism will do something to affect the behavior of others, and they won’t be thinking “Ah, she thanked me, this will have the effect of reinforcing this behavior.” So if I can’t avoid it, the next best thing would be to influence skillfully, not clumsily. In both cases, the other person’s behavior is being influenced, and in both cases they are not explicitly aware of this. The only difference in the second case is that I know what I’m doing.
I definitely understand where you’re coming from. I can empathize with the sense of violation and disrespect, and I agree that in a lot of situations such behavior is problematic, but I probably wouldn’t agree with you on what situations, or how often they occur. This was my biggest problem with PUA when I first heard about it. I found it horrifyingly offensive that men might take advantage of the security holes in my brain to get me to sleep with them. But...confident, suave men are attractive. If a man were “naturally” that way, then he’s “just sexy,” but if someone who didn’t initially start out that way explicitly studies how to behave in an attractive manner, that’s creepy.
Why? It’s not like no one’s ever allowed to try to get anyone to sleep with them, and it’s not like I would favor a strict rule of a complete, explicit disclaimer explaining, “Everything I say is with the sole intention of convincing you to have sex with me.” (Such a disclaimer wouldn’t even be true, necessarily. Human interaction is complex and multi-faceted, and any given conversation would have multiple motives, even if one dominates.)
So what’s the difference between a man who’s “just sexy” and a “creepy PUA” who behaves the same way? (We’ll ignore some of the blatant misogyny and unattractive bitterness among many PUA, because many women find the abstract concept itself creepy, with or without misogyny.)
I think it’s the knowledge differential, which causes a very skewed power balance. The naturally confident, extroverted man is unconsciously playing out a dance which he never really examined, and the woman he’s chatting up is doing the same. When this man is replaced with a hyper self-aware PUA, the actions are the same, but the woman is in the dark while the man can see exactly why what he says causes her to react the way she does.
It’s like a chess game between Garry Kasparov and a guy who only vaguely realizes he’s playing chess. Yes, it’s unfair. But I think the more practical solution is not making Kasparov handicap himself, but teaching the other guy how to play chess.
I think the line between conscious and unconscious influencing of behavior is thinner and more fluid than you seem to say, more like a sliding scale of social self-awareness. And the line between manipulation and self-improvement is even thinner. What if I decided to be much nicer to everyone all of a sudden because I wanted people to like me? The brain is not a perfect deceiver; soon I’ll probably fake it til I make it, and everyone’s lives would be more pleasant.
In the end, I treat emotional manipulation (which involves changing one’s emotional responses to certain behaviors, rather than telling people factual lies) the way I treat offense. It’s just not practical to ban offending people. I think it’s more useful to be aware of what offends us, and moderate our responses to it. In the same way, it’s not possible to ban influencing other people’s behavior without their explicit knowledge; the naturally sexy man does this just as much as the PUA does. It’s possible to have a norm of taking the other person’s wishes into account, and it’s possible to study the security holes in our own minds and try to patch them up.
I think there is a difference. You’re right that all our behavior has or can have a reinforcing effect on other people. But smiles, and frowns, and thank-yous and such aren’t therefore just reinforcers. When I smile at someone, I express something like affection, and if I don’t feel any affection, I smile falsely. All these kinds of behaviors are the sorts of things that can be done honestly or falsely, and we ought to do them honestly. We do this with children, but with adults it’s disrespectful.
It might be possible to smile at someone for the sake of reinforcing some behavior of theirs, and to feel affection all the while, but my sense is that either a smile is an expression of affection, or it is done for some ulterior end.
I think your initial reaction to PUA is spot on. It’s a monstrous practice.
Here’s where I think human thinking is more complicated, muddled, and mutually-reinforcing than you say. In the example of saying “Thank you,” is it really so inconceivable that someone might say “Thank you,” while thinking (or, more likely, wordlessly intuiting) something along the lines of “I’m grateful and happy that this person did this, and I would like them to do it again”? In fact, many of these “reinforcement” or “animal training” tips, while phrased repulsively, mostly end up advising, “Remember to consistently express the gratitude you feel, and refrain from expressing any annoyance you might feel.”
Here’s what I might think, if I were the wife in that example: “Not only does nagging and expressing annoyance when I feel my reasonable expectations were not met belittle and irritate my husband, it doesn’t even work. He still doesn’t put the damn clothes in the damn hamper! We’re both less happy, and I didn’t even get him to change.” If I understand you correctly, that last part, where I discuss the efficacy of my nagging at getting me what I want, sounds dishonestly manipulative to you.
We all expect things from others, and we all care about others. Is it always, inevitably wrong to sully considerations of caring/being a nice person with considerations of ensuring your expectations and needs get met? Or is it that the only legitimate way to get other human beings to meet your expectations is to sit them down and explain it all to them, even if they’re annoyed and made unhappy by this Talk, and its lack of emotional salience means it doesn’t work?
Saying “Thank you” and ignoring the clothes that don’t get put in the hamper works. It bypasses defensive, angry, annoyed reactions to nagging. It accurately expresses that clothes-in-the-hamper make me happy—in fact, more directly than the nagging method did, because the nagging method required the husband to infer that clothes-on-floor causes irate nagging, therefore clothes-in-the-hamper must cause happiness and gratitude. He’s happy, because he feels appreciated and doesn’t feel like he’s a teenager again being prodded by his mother. I’m happy, because I don’t feel like a grumpy middle-aged mother of a teenager. The clothes are in the hamper.
Was it wrong that I started all this because I was annoyed at having to nag him and wanted a more reliable way to get him to put his clothes in the hamper? Even though the (empirically sound) advice only told me to frame the same content—“Floor bad, hamper good”—in a more positive light, expressing happiness and gratitude when things go right, rather than irritation and disappointment when things go wrong? Even though once I shook myself of the nagging mindset the happiness and gratitude was not grudgingly given, was not an inaccurate portrayal of my now-happier mental state, was not intended to belittle my husband, but only to make us both happier AND get him to put the clothes in the hamper?
Even without any feedback from others? Or are you OK with a specific kind of feedback? What kind would it be? Is explicitly telling a person what you expect of them OK? If so, when does it become not OK?
Yes, even without feedback, though it’s always helpful to have other people to think with. As to when telling someone what to do is okay and not, I can’t imagine there’s any general rule, but I also expect we’re all familiar with the kinds of situations where you can do that and where you can’t.
Just to be clear: if a hundred randomly-selected humans are presented with an identical list describing, in full detail, a hundred cases where person A tells person B what to do, and those humans are asked to classify those cases into acceptable, unacceptable, and borderline, your expectation is that most or all of those humans will arrive at the same classifications?
Because I find that extremely unlikely.
Really? To me, it depends substantially on how the list is generated. If we try to “rip from the headlines,” I’d expect substantial disagreement. If we follow you around and watch you tell people what to do in your ordinary week, I expect more agreement.
In short, there are lots of points of disagreement about social interaction, but there are far more mundane and uncontroversial interactions than controversial ones.
Hm.
Well, I certainly agree that it’s possible to generate a list of a hundred cases that 95% of people would agree on the classification of.
But if you followed me around for a week and picked samples randomly from that (both of cases where I tell people what to do, and cases where I could have told people what to do and didn’t), and you asked a hundred people, I expect you’d get <60% congruence. I work in an office full of Americans and Israelis, I am frequently amused and sometimes horrified by the spread of opinion on this sort of thing.
Of course, if you narrowed your sample to middle-class Americans, you might well get up above 90%.
Edit: I should explicitly admit, though, that I was not envisioning a randomly generated list of cases. It was a good question.
I had something like a set of mundane cases in mind. My post was just meant to point out that discerning these sorts of situations is not something we use a set of rules or criteria for (at least no fixed set we could usefully enumerate), but most people are socially competent enough to tell the difference.
I agree that most people who share what you’re calling “social competence” within a given culture share a set of rules that determine acceptable utterances in that culture, and that those rules are difficult to enumerate.
Oh, you’re definitely on to something, and it’s something important.
That said, I don’t think what you’re on to has to do with whether and when it’s ethical to manipulate people’s behavior.
So what am I on to then?
Roughly, that we often respond to others’ ability to cause us harm (whether by modifying our behavior or our bank accounts or our internal organs or whatever other mechanism) as a threat, independent of their likelihood of causing us harm.
So if you demonstrate, or even just tell me about, your ability to do these things, then while depending on the specific context, my specific reaction will be somewhat different… my reaction to you knowing my bank PIN number will be different from my reaction to you knowing how to modify my behavior or how to modify the beating of my heart or how to break into my home… they will all have a common emotional component: I will feel threatened, frightened, suspicious, attacked, violated.
That all is perfectly natural and reasonable. And a common and entirely understandable response to that might be for me to declare that, OK, maybe you are able to do those things, but a decent or ethical person never will do those things. (That sort of declaration is one relatively common way that I can attempt to modify your likelihood of performing those actions. I realize that you would only consider that a form of manipulation if I realize that such declarations will modify your likelihood of performing those actions. Regardless, the declaration modifies your behavior just the same whether I realize it or not, and whether it’s manipulation or not.)
But it doesn’t follow from any of that that it’s actually unethical for you to log into my bank account, modify my heartbeat, break into my home, or modify my behavior. To my mind, as I said before, the determiner of whether such behavior is ethical or not is whether the result leaves me better or worse off.
Breaking into my home to turn off the main water valve to keep my house from flooding while I’m at work is perfectly ethical, indeed praiseworthy, and I absolutely endorse you doing so. Nevertheless, I suspect that if you told me that you spent a lot of time thinking about how to break into my home, I would become very suspicious of you.
Again, my emotional reaction to your demonstrated or claimed threat capacity is independent of my beliefs about your likely behaviors, let alone my beliefs about your likely intentions.
This seems very implausible to me. I often encounter people with the ability to do me great harm (a police officer with a gun, say), and this rarely if ever causes me to be angry, or feel as if my dignity has been infringed upon, or anything like that. Yet these are the reactions typically associated with finding out you’ve been intentionally manipulated. Do you have some independent reason to believe this is true?
Yes, but no reasons I can readily share. And, sure, I might be wrong.
… And here begins the debate.
What do we do? What do we think about this piece of freaking powerful magic-science?
I vote we keep it a secret. Some secrets are too dangerous and powerful to be shared.
I think the cat is out of the bag on this one.
This statement without context is clearly incorrect; there are all sorts of behaviors we can ethically execute with respect to both humans and other animals. I understand that what you and the OP both mean to connote is particular behaviors which we restrict in typical contexts only to non-human animals, but if you’re going to label them as unethical when applied to humans it helps to specify what behaviors and context those are.
That’s a little more specific, but not too much, as I’m not really sure what you mean by “convincing” here.
That is, if at time T1 I don’t exhibit behavior B and don’t assert that I should exhibit B, and you perform some act A at T2 after which I exhibit B and assert that I should exhibit B, is A an act of convincing me (and therefore OK on your account) or not (and therefore unethical on your account)? How might I test that?
This, on the other hand, is clear. Thank you.
I disagree with it strongly.
That story doesn’t trouble you at all?
For most people, there’s lots of low hanging fruit from trying to recognize when they are reinforcing and punishing behaviors of others. Also, positive reinforcement is more effective at changing behavior than positive punishment.
But that doesn’t mean that we should embrace conditioning-type behavior-modification wholesale. I’m highly doubtful that conditioning responses are entirely justifiable by decision-theoretic reasons. And “not justifiable by decision theoretic reasons” is a reasonable definition of non-rational. Which implies that relying on those types of processes to change others behaviors might be unethical.
Does it trouble me at all? I suppose. Not a huge amount, but some. Had Esar said “Doing this to people without their consent is troubling” rather than “never do this to other people without their explicit consent” I likely wouldn’t have objected.
My response to the rest of this would mostly be repeating myself, so I’ll point to here instead.
More generally, “conditioning-type behavior-modification” isn’t some kind of special category of activity that is clearly separable from ordinary behavior. We modify one another’s behavior through conditioning all the time. You did it just now when you replied to my comment. Declaring it unethical across the board seems about as useful as saying “never kill a living thing.”
You seem to know what I mean, so I won’t go into a bunch of unnecessary qualifications.
Not necessarily. Is the meaning of ‘convince’ really unclear? Threatening someone with a gun seems to satisfy your description, but it’s obviously not a case of convincing. I’m not sure what you’re unclear about.
If you care to explain why, please do so.
Sure.
The easiest way to get at it is with an example.
Suppose I decide I want my coworkers to visit my desk more often at work, and therefore begin a practice of smiling at everyone who visits, keeping treats on my desk and inviting visitors to partake, being nicer to people when they visit me at my desk than I am at other times, and otherwise setting up a schedule of differential reinforcement designed to increase the incidence of desk-visiting behavior, and I do all of that without ever announcing to anyone that I’m doing it or why I’m doing it, let alone securing anyone’s consent. (Let alone securing everyone’s consent.)
Do you consider that an example of unethical behavior? I don’t.
Now, maybe you don’t either. Maybe it’s “obviously” not an example of manipulating the behavior of other people by means other than convincing them that they should behave in a certain way. I don’t really know, since you’ve declined to clarify your constraints. But it sure does seem to match what you described.
You’re right that this doesn’t seem quite unethical, but it is awfully creepy and I’m not sure how to pull my intuitions apart there. Sitting across from someone who is faking affection and smiles and pleasantries so as to manipulate my behavior would cause me to avoid them like the plague.
In professional environments I find this happens all the time, and when the fake friendliness is discovered as such, the effect reverses considerably. If it’s terribly important to something’s being effective that the person you’re doing it to doesn’t know what’s going on, it’s probably bad.
(nods) Absolutely. I could have also framed it to make it seem far creepier, or to make it seem significantly less creepy.
In particular, the use of loaded words like “faking” and “manipulate” ups the creepy factor of the description a lot. The difference between faking affection and choosing to be affectionate is difficult to state precisely, but boy do we respond to the difference between the words!
I agree that most activities which depend on my ignorance for their effectiveness are bad. I even agree that a higher percentage of activities which depend on my ignorance for their effectiveness are bad than the equivalent percentage of activities that don’t so depend.
That said, you seem to be going from that claim to the implicit claim that they are bad by virtue of depending on my ignorance. That’s less clear to me.
You and Esar both: Taboo ‘creepy’? Particularly with an eye to ‘why is it important that this situation evokes this emotion’?
Well, I think it’s important because IMHO that negative emotional response is what underlies the (incorrect) description of the corresponding behavior as unethical. But I expect Esar would find that implausible.
‘Taboo with an eye to this question’, not ‘answer this question’. I’d already noticed the pattern that people consider finding something creepy to be sufficient reason to label it unethical, but that observation isn’t useful for very much beyond predicting other peoples’ labeling habits.
Oh, I see.
Sorry, misunderstood.
I could replace “creepy” everywhere it appears with “emotionally disquieting”, but I’m not sure what that would help. I figured using the same language Esar was using would be helpful, but I may well have been wrong.
I’ll put it simply: if someone asks me about my kids, neither to be polite nor because they care, but because they want to change the way I behave, then they’re (in most cases) being manipulative and insincere. While perhaps they’re not wronging me, per se, it’s certainly not something that speaks well of them, ethically speaking. If you find this controversial, then you surprise me.
It would be bad advice, I think, to encourage people to use positive reinforcement on others when their ignorance is necessary for it to be effective. Not just practically bad advice, as people are pretty good at picking up on fake friendliness. But full stop ethically damaging advice, if taken seriously. I’m not saying that every such case is going to be unethical, but I’m not in the business of lawlike ethical principles anyway.
No, what I said was that behaviors which depend on someone’s ignorance for their effectiveness are often also bad behaviors. I didn’t say anything one way or the other about a stricter relation between the two properties, but I’ll say now that I don’t think they’re unrelated.
I agree that asking you about your kids solely to change your behavior is manipulative.
I also agree that it’s insincere. (Which is an entirely distinct thing.)
I would also say that asking you about your kids solely to be polite is insincere.
I would not agree that any of these are necessarily unethical.
I am not quite sure what you mean by “ethically damaging advice.”
I agree with you that it’s not always unethical to positively reinforce others without their knowledge.
I would agree that “Positively reinforcing others without their knowledge is a good thing to do, do it constantly” is advice that, if taken seriously, would often lead me to perform unethical acts. I can accept calling it unethical advice for that reason, I suppose.
But I also think that “Positively reinforcing others without their knowledge is a bad thing to do, never do it.” is unethical advice in the same (somewhat unclear) sense.
I agree that behaviors that depend on others’ ignorance are often also bad behaviors.
Behaviors that depend on others’ knowledge are also often bad behaviors.
Agreed on all counts. In fact, it doesn’t look like we disagree at all, judging from your comment.
Oh good!
When you started out by saying “never do this,” I concluded otherwise.
I’m pleased to discover I was wrong.
Well, I think I’d stand by what I said originally. Though I guess I’m counting on no one reading that as the exceptionless proposition ‘for all x such that x is a case of using positive reinforcement without someone’s knowledge, x is unethical’. Likewise, if someone asked me, I’d say ‘Don’t ever shoplift, it’s unethical.’ Though I wouldn’t want or expect anyone to read that as ‘all cases of shoplifting are, without exception, unethical.’
OK. I apologize for misunderstanding your original comment.
Quite alright, I’ve enjoyed the discussion.
What do you think being polite is?
I think it’s false to suggest that pleasantries are being outright faked. This person is probably not sitting there going, “Oh, woe is me, I have to pay the horrible price of smiling and being nice to these imbeciles in order to make them give me what I want; I would never do that otherwise.” In fact, why would he even want his coworkers to visit his desk more if he had such utter contempt for them that he had to fake affection wholesale?
Rather, like many people, there’s a part of him which would probably like to be a nicer person overall, but he can’t always bring himself to live up to the ideal. “People will visit my desk more” is a good immediate incentive to be a better person. The coworker who wants more people to visit their desk is also affected by the results of his own behavior. He’ll probably be happier because of the visitations, and his happiness would cause him to smile more, and the very act of smiling would make him even more happy. After a while the “initial motivation,” whether it was 100% selfish “I want people to visit my desk more; damn their own desires” or the 100% altruistic “I want to manipulate myself into being a nicer person,” or, more likely, a mixture of the two, has faded away, and all that remains is the slightly modified, more pleasant person.
I don’t understand how using friendly behavior to reinforce people visiting one’s desk precludes that behavior being genuine. You seem to be dismissing the possibility that the person in question feels real affection, and is smiling because they are in fact happy that their desk is being visited. Just because they are using their (real) positive response to coworkers visiting their desk as positive reinforcement doesn’t mean that their behavior is “fake” in any way.
Just like a woman who feels a surge of affection towards her husband when he puts away the laundry, and kisses or praises him.
Yes, it’s positive reinforcement, but it’s also a genuine response.