White Lies
Background: As can be seen from some of the comments on this post, many people in the LessWrong community take an extreme stance on lying. A few days before I posted this, I was at a meetup where we played the game Resistance, and one guy announced before the game began that he had a policy of never lying, even when playing games like that. This post was written for such members of the LessWrong community. I’m not trying to convince basically honest people who hold the normal view of white lies that they need to give up being basically honest.
Mr. Potter, you sometimes make a game of lying with truths, playing with words to conceal your meanings in plain sight. I, too, have been known to find that amusing. But if I so much as tell you what I hope we shall do this day, Mr. Potter, you will lie about it. You will lie straight out, without hesitation, without wordplay or hints, to anyone who asks about it, be they foe or closest friend. You will lie to Malfoy, to Granger, and to McGonagall. You will speak, always and without hesitation, in exactly the fashion you would speak if you knew nothing, with no concern for your honor. That also is how it must be.
- Rational!Quirrell, Harry Potter and the Methods of Rationality
This post isn’t about HPMOR, so I won’t comment on the fictional situation the quote comes from. But in many real-world situations, it’s excellent advice.
If you’re a gay teenager with homophobic parents, and there’s a real chance they’d throw you out on the street if they found out you were gay, you should probably lie to them about it. Even in college, if you’re still financially dependent on them, I think it’s okay to lie. The minute you’re no longer financially dependent on them, you should absolutely come out for your sake and the sake of the world. But it’s okay to lie if you need to in order to keep your education on track.
Oh, maybe you could get away with just shutting up and hoping the topic doesn’t come up. When asked about dating, you could try to evade while being technically truthful: “There just aren’t any girls at my school I really like.” “What about _____? Why don’t you ask her out?” “We’re just friends.” That might work. But when asked directly “are you gay?” and the wrong answer could seriously screw up your life, I wouldn’t bet too much on your ability to “lie with truths,” as Quirrell would say.
I start with this example because the discussions I’ve seen on the ethics of lying on LessWrong (and everywhere, actually) tend to focus on the extreme cases: the now-cliché “Nazis at the door” example, or even discussion of whether you’d lie with the world at stake. The “teen with homophobic parents” case, on the other hand, might have actually happened to someone you know. But even this case is extreme compared to most of the lies people tell on a regular basis.
Widely-cited statistics claim that the average person lies once per day. I recently saw a new study (that I can’t find at the moment) that disputed this and claimed most people lie rather less often than that, but it still found most people lie fairly often. These lies are mostly “white lies” told to, say, spare others’ feelings. Most people have no qualms about that kind of lie. So why do discussions of the ethics of lying so often focus on the extreme cases, as if those were the only ones where lying is maybe possibly morally permissible?
At LessWrong there’ve been discussions of several different views all described as “radical honesty.” No one I know of, though, has advocated Radical Honesty as defined by psychotherapist Brad Blanton, which (among other things) demands that people share every negative thought they have about other people. (If you haven’t already, I recommend reading A. J. Jacobs on Blanton’s movement.) While I’m glad no one here thinks Blanton’s version of radical honesty is a good idea, a strict no-lies policy can sometimes have effects that are just as disastrous.
A few years ago, for example, I went to see the play my girlfriend had done stage crew for, and afterwards she asked what I thought of it. She wasn’t satisfied with my initial noncommittal answers, so she pressed for more. Not in a “trying to start a fight” way; I just wasn’t doing a good job of being evasive. I eventually gave in and explained why I thought the acting had sucked, which did not make her happy. I think incidents like that must have contributed to our breaking up shortly thereafter. The breakup was a good thing for other reasons, but I still regret not lying to her about what I thought of the play.
Yes, there are probably things I could’ve said in that situation that would have been not-lies and also would have avoided upsetting her. Sam Harris, in his book Lying, spends a lot of time arguing against lying in that way: he takes situations where most people would be tempted to tell a white lie and suggests ways around it. But for that to work, you need to be good at striking the delicate balance between saying too little and saying too much, and at framing hard truths diplomatically. Are people who lie because they lack that skill really less moral than people who are able to avoid lying because they have it?
Notice the signaling issue here: Sam Harris’ book is a subtle brag that he has the skills to tell people the truth without too much backlash. This is especially true when Harris gives examples from his own life, like the time he told a friend “No one would ever call you ‘fat,’ but I think you could probably lose twenty-five pounds,” and his friend went and did it rather than getting angry. Conspicuous honesty also overlaps with conspicuous outrage, the signaling move that announces (as Steven Pinker put it) “I’m so talented, wealthy, popular, or well-connected that I can afford to offend you.”
If you’re highly averse to lying, I’m not going to spend a lot of time trying to convince you to tell white lies more often. But I will implore you to do one thing: accept other people’s right to lie to you. About some topics, anyway. Accept that some things are none of your business, and sometimes that includes the fact that there’s something which is none of your business.
Or: suppose you ask someone for something, they say “no,” and you suspect their reason for saying “no” is a lie. When that happens, don’t get mad or press them for the real reason. Among other things, they may be operating on the assumptions of guess culture, where your request means you strongly expected a “yes” and you might not think their real reason for saying “no” was good enough. Maybe you know you’d take an honest refusal well (even if it’s “I don’t want to and don’t think I owe you that”), but they don’t necessarily know that. And maybe you think you’d take an honest refusal well, but what if you’re lying to yourself?
If it helps to be more concrete: some men will react badly to being turned down for a date. Some women too, but probably more men, so I’ll make this gendered, and also because dealing with someone who won’t take “no” for an answer is a scarier experience when the asker is a man and the person saying “no” is a woman. So I sympathize with women who give made-up reasons for saying “no” to dates, to make saying “no” easier.
Is it always the wisest decision? Probably not. But sometimes, I suspect, it is. And I’d advise men to accept that women doing that is OK. Not only that, I wouldn’t want to be part of a community with lots of men who didn’t get things like that. That’s the kind of thing I have in mind when I say to respect other people’s right to lie to you.
All this needs the disclaimer that some domains should be lie-free zones. I value the truth and despise those who would corrupt intellectual discourse with lies. Or, as Eliezer once put it:
We believe that scientists should always tell the whole truth about science. It’s one thing to lie in everyday life, lie to your boss, lie to the police, lie to your lover; but whoever lies in a journal article is guilty of utter heresy and will be excommunicated.
I worry this post will be dismissed as trivial. I simultaneously worry that, even with the above disclaimer, someone is going to respond, “Chris admits to thinking lying is often okay, now we can’t trust anything he says!” If you’re thinking of saying that, that’s your problem, not mine. Most people will lie to you occasionally, and if you get upset about it you’re setting yourself up for a lot of unhappiness. And refusing to trust someone who lies sometimes isn’t actually very rational; all but the most prolific liars don’t lie anything like half the time, so what they say is still significant evidence, most of the time. (Maybe such declarations-of-refusal-to-trust shouldn’t be taken as arguments so much as threats meant to coerce more honesty than most people feel bound to give.)
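To make that last claim concrete, here is a minimal Bayesian sketch in odds form; the 90%/10% figures are assumed for illustration, not taken from any study:

\[
\frac{P(X \mid \text{says } X)}{P(\neg X \mid \text{says } X)} = \frac{P(\text{says } X \mid X)}{P(\text{says } X \mid \neg X)} \times \frac{P(X)}{P(\neg X)}
\]

If someone asserts X 90% of the time when it’s true but also (falsely) asserts it 10% of the time when it’s false, the likelihood ratio is 0.9/0.1 = 9: hearing them say X multiplies your odds on X by nine. Even a fairly unreliable speaker moves your odds substantially, which is why refusing to update on anything an occasional liar says throws away real evidence.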
On the other hand, if we ever meet in person, I hope you realize I might lie to you. Failure to realize a statement could be a white lie can create some terribly awkward situations.
Edits: Changed title, added background, clarified the section on accepting other people’s right to lie to you (partly cutting and pasting from this comment).
Edit round 2: Added link to paper supporting claim that the average person lies once per day.
There are certain lies that I tell over and over again, where I’m 99% sure lying is the morally correct answer. Stereotypical example: my patient is lying in a lake of poop, or is ringing the call bell for the third time in 15 minutes to tell me that they’re thirsty or in pain or need a kleenex, and they’re embarrassed and upset because they’re sure I must be frustrated and mad that they’re making me do so much work. “Of course I don’t mind,” I’ve said over and over again. “This doesn’t bother me. I’ve got plenty of time. I just want you to be comfortable, that’s my job.” When it’s 4 am and I desperately want to go on break and eat something, none of these things are true. But it’s my job, and I want to want to do it, so the fact that sometimes I desperately don’t want to do it is kind of moot. But the last thing a patient in the ICU needs to hear from their nurse is “yes, I’m pissed that you shat in the bed again because I was about to go on break and now I can’t and I’m hungry and cranky.” I keep that to myself.
...Other than that, I generally don’t lie to friends, although I do lie by omission, especially when it comes to my irrational feelings of frustration or irritation with things they do. I’m generally not bothered by being very open with people about, e.g., my relationships or other personal things, so I’m confused when other people want to lie or conceal information about these sorts of things. I actually have a really hard time keeping up with other people’s systems of lying; when you’re friends with two people who both have specific lists of things they don’t want you to ever tell the other person, it gets complicated. (For almost a year my best friend was dating a man without telling her ex-husband, and I was seeing her ex-husband every time I went to play with my godson, and I had to remember to lie about a whole bunch of random things like “what did you and my ex-wife do on Saturday?” I respected that it was her choice whether or not to tell him, but I still found this really, really irritating.)
When a student asks me to write her a letter of recommendation and expresses some concern that this will be a bother for me I have said “Don’t worry, that’s part of my job” to signal that the request is appropriate.
Upvoted for a rare case of lying where I find myself unable to suggest a good alternative way to not lie, even for people with high verbal SAT scores.
“Don’t worry about it.”
Imperatives are often a nice fallback.
But is that literally as good for a patient in an ICU who really, really needs to not shut up about these things? I mean, in that situation, it would probably occur to me that the nurse might still be lying… but telling a lie like that is still a kind of permission to bother her, which “Don’t worry about it” isn’t.
Agreed. One of the things I think is wrong with lying in general is that it can mess up the incentives for behaviours you want to see more of (e.g. a white lie to your friend, claiming to like her awful haircut, doesn’t do anything to help your friend improve her future haircuts). In my example, I’m lying with respect to my first-order desires, but telling the truth according to my second-order desires. I may first-order want a few more minutes to drink tea and socialize with the other nurses, but I don’t endorse myself wanting that, and I certainly don’t want to encourage my patients to not call me because they’re worried I’m too busy or tired or cranky. I second-order want to encourage the behaviour where my patients call me for all the little things, and 90% of the time it’s annoying and stupid but 10% of the time it’s super important.
If I ever had a patient with a rationalist background, maybe I could explain all of that, but maybe not even then; most people aren’t at their best for following complex logic when they’re loopy on drugs or having trouble breathing or whatnot. So I go for the emotional reassurance, because that gets through. Still working on different phrasings, and I don’t always succeed; I was helping out another nurse with her patient who had diarrhea, putting her on the bedpan every half hour, and at one point she fell asleep, pooped in the bed while asleep, and then cried with frustration the whole time I changed her, and I wasn’t able to reassure her.
You can expand “Don’t worry about it” to include permission to bother her. “Don’t worry about it—please never give it a second thought if you need me for anything. That’s what I’m here to do.”
I don’t think “This doesn’t bother me” gets parsed literally anyway. In either case, whatever you say, they are pretty sure it is annoying for you, albeit they do want reassurance that it is not so annoying that you would snap “yes, this is annoying!”
Well, that’s a good idea right there. You could tell them: “Please don’t be embarrassed, and don’t hesitate to call me. You’re in an ICU and it’s very important that you communicate with us, even if it’s just a matter of discomfort. You shouldn’t assume you can tell the difference between something trivial and something serious, or between something that requires immediate attention and something that can wait.”
I would interpret that as a straightforward confirmation that it was in fact annoying. There would be no resulting awkwardness but it would definitely not make me more likely to speak up again.
“Taking care of you is my sacred duty. I care about you. It is important that you tell me if there is something wrong.”
This is true literally and in spirit.
To invoke a cheesy meme, I wish I could upvote twice, once for phrasing something that doesn’t involve telling a white lie, and the second time for consciously reinforcing that patient care is a sacred duty.
I would count it as a white lie. It’s literally accurate, but it implies a number of things. Some of those things are correct (you consider it important to care for the patient and be informed of any problems), but some of those things are incorrect (you are not annoyed). It isn’t disqualified as a lie just because you believe that your annoyance is not important.
I don’t think that the nurse is implying that he is not annoyed. Both the patient and the nurse recognise that the ‘crapping the bed’ situation is an annoying one, and the nurse is not denying that. The nurse is simply making it clear that his annoyance is a secondary concern, and that instead the welfare of the patient is the primary concern. The nurse genuinely believes that his own annoyance is relatively less important, and he is conveying that literally to the patient. This is actually the true situation, so I am confused about how you think he is lying, even implicitly.
If you go sufficiently upthread, you’ll find that it started with a post by Swimmer963, who is a nurse and is relating her own experience, quoted above.
Sorry, I should clarify. I was saying that “Taking care of you is my sacred duty. I care about you. It is important that you tell me if there is something wrong.” is precisely something that Swimmer963 could say even though she’s annoyed. She doesn’t have to deny that she’s annoyed, or even imply it. In fact it’s probably futile to try… of course she’s annoyed, and the patient suspects that. That is exactly the motivation for her lie in the first place.
The statement above nevertheless conveys her overall commitment to the patient’s wellbeing, and encourages the patient to understand that “Obviously, my nurse is annoyed about the crap in the bed, but there are more important factors at play here.”
As an extra bonus, I don’t think it’s a lie, hence providing a response to Eliezer’s implied challenge.
On the contrary, her claimed standard response (“Of course I don’t mind. This doesn’t bother me. I’ve got plenty of time.”) contains three lies, none of which will probably even be believed by the patient.
My point is that Swimmer963′s strategy probably doesn’t really achieve her goals, lying or no lying, and in my original post I was suggesting a possible (honest) alternative.
If a nurse started talking to me about her “sacred duty”, I certainly would not believe her.
What about if she just said: ‘duty’?
That’s not quite sufficient as it’s the word “sacred” which does the heavy lifting. Saying it’s her duty isn’t particularly meaningful for a nurse—it’s her job, that’s what she is paid to do. She is not doing you a favour, cleaning up shit is right there in her job description.
Would you believe them more or less than if they said they’re not annoyed that you shat the bed?
That depends. Mostly on the non-verbal clues that accompany the statement, but also on what do I know about this particular nurse.
Well the classic lie in medicine is when a sibling confides in the doctor that he doesn’t want to donate a kidney to his brother or sister and he’s just getting tested out of family pressure. I understand that in such a situation, the doctor will normally lie and say that they ran the tests and the sibling is not a compatible donor.
Actually, regardless of the reason, they just say that “no suitable donor is available.” If pressed, they say they never release potential donors’ medical information to recipients, for confidentiality and to protect donors from coercion.
That’s interesting . . . what happens if the potential donor asks for (and is willing to sign) a release so that his medical information can be shared?
Depends. Different countries have different laws governing such. For the most part, if the hospital sees any legal liability at all, they’ll do the standard CYA. Signing waivers / releases often doesn’t do a whole lot; some of your rights you cannot sign away. Regarding your question about releasing medical information, such waivers shouldn’t be a problem, although the transplant scenario may be a special case.
Regardless of the legalese, transplant doctors typically get to know you quite well, and more information slips out (implicitly and explicitly) than may be allowed by law (HIPAA be damned). Nullum ius sine actione (no right without a remedy), as they say. If no one complains, no one sues. Bit like driving without seatbelts.
I’m talking about the United States.
I.e., you don’t know either.
This is an interesting situation; after all, a simple utility calculation says that the receiver’s life is worth more than the donor’s annoyance. Then again, we’re getting close to the cases where utilitarianism fails horribly.
Well I think most people are reasonably comfortable with the idea that every adult should have complete discretion over what—if anything—is done with his organs.
The more interesting question is what to make of people who lie to conceal decisions in this area, especially physicians.
Yes, but what do you mean by “complete discretion”? After all, the donor was in fact willing to go through with it despite the misgivings, i.e., he valued his relationship with his family more than the annoyance of donating.
And while we’re on the subject of the donor’s preferences, note that both seem to score higher than his sibling’s life. Draw your own disturbing conclusions from that.
By that reasoning if there was some situation where he had to sell himself into slavery to save his sibling’s life, similarly disturbing conclusions could be drawn from his refusal to do that.
You’re making an awful lot of assumptions, including the assumption that the person is a utilitarian and that their reasons for not wanting to donate don’t also involve life or considerations that a wide range of people consider as important as life.
I mean that a potential donor should be able to decline for pretty much any reason, no matter how trivial or silly.
I’m not sure who you are talking about here. In the hypothetical I presented, the potential donor was not willing to go through with the donation.
Disturbing or not, it’s reality. A lot of people would not donate a kidney to save a sibling. Either because they hate their sibling and hope that he or she dies sooner rather than later; or because they are selfish and wouldn’t lift a finger to save a family member; or for some other reason.
Anyway, you keep trying to change the subject away from the issue of lying. Please stop it.
Well, in the example he can decline, he will simply have to deal with the consequences.
In which case, what would he do if the tests came back positive?
I’m pointing out flaws in the rationalization for lying.
Agree, but so what?
Positive for what?
What exactly is the flaw in your view? I’m not saying there is none, I’m just trying to understand your position.
So the potential donor still has complete discretion and thus there is no reason for the doctor to lie.
For compatibility as a donor.
Near as I follow your logic, the reason for lying is that the doctor is trying to protect the patient’s right to decide what—if anything—is done with his organs. However, as I pointed out, that right is not under threat; what is under threat is the patient’s “right” for his decision to have no consequences.
I disagree. For example, the potential donor might want to lie to spare the feelings of his sibling. Or to forestall family members from getting annoyed at him.
Lie and say he was incompatible. That’s kinda the point of this subthread.
Not exactly—the reason for the doctor lying is to prevent hurt feelings and family discord.
Sparing somebody’s feelings is a much worse reason for lying than protecting their right to bodily autonomy.
I meant what would the donor do if the person refused to lie.
I don’t disagree with you . . . have I suggested otherwise?
I don’t know, it would be up to the potential donor. But either way he gets to make his decision and nobody in this discussion is disputing that. Agreed?
ETA: Now that I have explained why medical personnel might lie about compatibility, is there any other flaw in your view? At this point, is there anything I have said which you disagree with?
For a consequentialist, having decisions have “consequences” should not be a terminal value. If decisions having consequences cause those decisions to not be made, that is good, but decisions having bad consequences is, in and of itself, bad.
But it is instrumentally useful if people’s decisions have consequences to the person doing the deciding that are correlated with the net effect of their decision.
Huh? We aren’t discussing the sibling’s decision to give or not give the kidney, we’re discussing the doctor’s decision, given that the sibling isn’t donating the kidney, to tell the patient that the sibling is a match. Are you implying that the doctor should reveal the match, so the patient will pressure the sibling into donating?
That is what the basic utility calculation shows, yes.
Your reference to SAT scores is rather odd. I suppose there is probably some correlation, but they are really quite different skill sets.
How about adding a tiny bit of ambiguity (or evasion of the direct question) and making up for it with more effusiveness, e.g., “it’s not only my job but it feels really good to know that I’m helping you, so I really want you to bug me about even trivial-seeming things!” All true, and all she’s omitting is her immediate annoyance, but that is truly secondary, as she points out below about first-order vs. second-order desires.
I’m not sure there’s a lie happening… it seems to me that in said circumstances the meanings of the sentences are conventionally mapped, like:
“yes, I’m pissed that you shat in the bed again because I was about to go on break and now I can’t and I’m hungry and cranky.” → I’m incredibly angry with you and I’m going to find out a way to kill you so you don’t bother me again. (Exaggerating a bit here for effect)
“Of course I don’t mind” → of course I do mind but it is not as bad as the example above.
Sentences mean what the listener makes of them, that’s why you have to speak a foreign language when talking to a foreigner who doesn’t speak your language.
A similar argument occurred to me, but I think it does border on proving too much. It also depends on knowing what the listener will make of the sentence. I think that the concept of “lying” does depend largely on the idea that the explicit, plain meaning of a sentence having a privileged position, over implications, signalling, Bayesian updates caused by the statement, etc. If someone says “Well, the probability of me telling you that I am not having an affair, given that I am having an affair, is not much smaller than the probability given that I am not having an affair, so if you significantly updated your prior simply because of my denial, the blame is on your end, not mine”, I don’t think many people would find that a reasonable response.
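To spell out the likelihood-ratio reasoning implicit in that affair example (a sketch of the quoted argument’s structure, not an endorsement of it):

\[
\text{posterior odds of affair} = \frac{P(\text{denial} \mid \text{affair})}{P(\text{denial} \mid \text{no affair})} \times \text{prior odds of affair}
\]

Since nearly everyone denies an affair whether or not they’re having one, both conditional probabilities are close to 1, the ratio is close to 1, and the denial is almost no evidence either way; the “blame is on your end” move trades on exactly this. But that’s the point: whether something is a lie depends on the sentence’s plain meaning, not on how much it actually moves a well-calibrated listener.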
I think I pinned down the distinction here.
If you say something like “yes, I’m pissed that you shat in the bed again because I was about to go on break and now I can’t and I’m hungry and cranky,” the patient is going to form a lot of important beliefs regarding the question they asked that are not true, more than if you say “this doesn’t bother me.” You have to say whatever sentence ends up misleading the patient the least about what they want to know.
For the affair, on the other hand, it is not so: they’d form more valid beliefs if you said that you are having an affair than if you said you aren’t.
The truth is whatever word noises, body language, intonation, and so on mislead the listener the least. It usually has to be approximate, due to imperfect knowledge and so on.
Having an affair is discrete, while the annoyance level is continuous. There’s simply no explicit, plain meaning possible for a continuous variable like that; one has to deduce it from the tone of voice, body language, etc. One could of course have friendly body language and tone while saying something like “yes, it is incredibly annoying,” but that would merely confuse the listener.
My ex-wife is in geriatrics, and I’ve heard a few situations from her where she, possibly appropriately, lied to patients with severe dementia by playing along with their fantasies. The most typical example would be a patient believing their dead spouse is coming that day for a visit, and asking about it every 15 minutes. I think she would usually tell the truth the first few times, but felt it was cruel to keep telling someone constantly that their spouse is dead, and getting the same negative emotional reaction every time, so at that point she would start saying something like, “I heard they were stuck in traffic and can’t make it today.”
The above feels to me like a grey area, but more rarely a resident would be totally engrossed in a fantasy, like thinking they were in a broadway play or something. In these cases, where the person will never understand/accept the truth anyway, I think playing along to keep them happy isn’t a bad option.
I’m curious about how you, being a nurse, would prefer that the patient behave in situations like this? There don’t seem to be great options—is there a least-bad attitude?
...I feel like a lot of that boils down to stuff out of patients’ control, like “don’t be confused or delirious.” Assuming that my patient is totally with it and can reasonably be expected to try to behave politely, I prefer that patients tell me right away when they need something, listen to my explanation of what I’m going to do about it and when I’ll be able to do it, or why I can’t do anything about it, and then accept that and not keep bringing up the same complaint repeatedly unless it gets worse. I have had patients who rang the call bell every 5 minutes for hours to tell me that they were thirsty, when I’d already explained that I couldn’t give them anything by mouth, or that their biggest concern was being thirsty but I was more concerned that their heart rate was 180 and I really really needed to deal with that first.
I obviously prefer it when patients aren’t embarrassed and I can joke around with them and chat about their grandkids while cleaning their poop. But emotional reactions aren’t under most people’s control either, so it’s not a reasonable thing to ask.
Relevant recent Slate Star Codex post
Of course, there being enough nurses might make the job more consistently enjoyable so that they wouldn’t have to lie to their patients as often. Same applies to doctors. Shit in itself isn’t that bad. It’s the shit plus hurry that kills you.
Just saying “this is part of my job and I love my job” is not good enough?
I wonder if there is a better way of handling this, other than telling your best friend that you are not going to be a part of this game and risking a backlash… In a similar situation I ended up curtailing my interactions with the party I’d have to lie habitually to, which is rather suboptimal.
It sounds evasive and not like the natural response, and I’m not all that worried about my patients yelling “no, you’re a liar!” and getting mad if I tell them I don’t mind at all, and I don’t have any particular reason to want to not lie in this situation.
Why settle for merely good enough, when a few words can alleviate discomfort so cheaply and there’s still better left? Showing you care about the people, instead of some abstraction called a job, usually works better for making them comfortable.
I find it takes a great deal of luminosity in order to be honest with someone. If I am in a bad mood, I might feel that it’s my honest opinion that they are annoying, when in fact what is going on in my brain has nothing to do with their actions. I might have been able to like the play in other circumstances, but was having a bad day, so flaws I might otherwise have been able to overlook were magnified in my mind, etc.
This is my main fear with radical honesty, since it seems to promote thinking that negative thoughts are true just because they are negative. The reasoning goes “I would not say this if I were being polite, but I am thinking it, therefore it is true,” without realizing that your brain can make your thoughts more negative than the truth just as easily as it can make them more positive than the truth.
In fact, saying you enjoyed something you didn’t enjoy, and signalling enjoyment with appropriate facial muscles (smiling etc.), can improve your mood by itself, especially if it makes the other person smile.
Many intelligent people get lots of practice pointing out flaws, and it is possible that this trains the brain into a mode where one’s first thoughts on a topic will be critical regardless of the ‘true’ reaction. If your brain automatically looks for flaws in something and then a friend asks your honest opinion you would tell them the flaws; but if you look for things to compliment your ‘honest’ opinion might be different.
tl;dr: honesty is harder than many naively think, because our brains are not perfect reporters of their state, and even if they were, good luck explaining your inner feelings about something across the inferential distance. Better to just adjust all your reactions slightly in the positive direction to reap the benefits of happier interactions (but only slightly; don’t say you liked activities you loathed, otherwise you’ll be asked back; say they were OK but not your cup of tea, etc.)
If I am honest without accuracy… if I am proud to report my results of my reasoning as they are, but my actual reasoning is sloppy… then I shouldn’t congratulate myself for giving precise info, because the info was not precise; I simply removed one source of imprecision, but ignored another.
Saying “you are annoying” feels like an extremely honest thing, and I may be motivated to stop there.
However, saying “sorry, I’m in a bad mood today; I think it’s likely that on a different day I would appreciate what you are trying to do, but today it doesn’t work this way, and it actually annoys me” is even more honest, and possibly less harmful to the listener.
A cynical explanation is that while attempting to be extremely honest, we refuse to censor the information that might hurt the listener… but we still censor the information that would hurt us. For example, the short version of “you are annoying” contains the information that may hurt my friend, but conceals the information about my own vulnerability.
Perhaps a good heuristic could be: don’t hurt other people by your honesty unless you are willing to hurt yourself as much (or 20% more, to balance for your own biased perception), and even this only if they agreed to play by these rules. (Of course you are allowed to select your friends according to their ability and willingness to play by these rules. But sometimes you have to interact with other people, too.)
My own (very limited) observation of trying to be radically honest has been that until I first say (or at least admit to myself) the reaction of annoyance, I can’t become aware of what lies beyond it. If I’m angry at my wife because of something else that happened to me, I usually won’t know that it’s because of something else until I first express (even just to myself) that I am angry at my wife.
Until I actually tried being honest about such things, I didn’t know this, and practicing such expression seemed beneficial in increasing my general awareness of thoughts and emotions in the present or near-present moment. I don’t even remotely attempt to practice radical honesty even in my relationship with my wife, but we’ve both definitely benefited from learning to express what we feel… even if what we’re feeling often changes in the very moment we express it. That change is kind of the point of the exercise: if you’ve completely expressed what you’re resenting, it suddenly becomes much easier to notice what you appreciate.
I think that even Blanton’s philosophy kind of misses or overstates the point: the point isn’t to be honest about every damn thing, it’s to avoid the sort of emotional constipation that keeps you stuck being resentful about things because you never want to face or admit that resentment, and so can never get past it.
This made me think; I may have some luminosity privilege that needs checking...
Wow. This comment made me happy, even with the jargon. Positive reinforcement for thinking about how your experience might be atypical and other people might have needs or disabilities you hadn’t considered!
If you are interested in some more things that may distinguish your experience from ChrisHallquist’s, you might consider that his examples are mainly about lying in self-defense to hostile people or people who have deliberately asked questions that are costly to evade or answer honestly. Picture an Aikido expert who lives and works in a safe neighborhood getting angry at a janitor who lives in a violent slum for saying they reserve the right to throw a punch if the situation calls for it. I might think the poor janitor has the right to defend themself, but that doesn’t mean I’d be very likely at all to punch someone at your dinner party.
Some of his examples were like that. The part of his post that most bothered me was “accept others’ right to lie to you”, and the title has now been changed to “White Lies”, which I’ve never heard used conversationally to cover things like “no, Mom, not gay”.
I have always interpreted “white lies” as “lies I approve of” rather than “small lies,” because the size of a lie is clearly a subjective measurement. It looks like wiki mostly agrees.
“Lies I approve of” and “white lies” are overlapping sets, but aren’t quite the same. For example, if a Nazi asks you if you’re hiding any Jews (and you are), I approve of lying to them, but this isn’t a white lie. On the other hand, if your horrible racist aunt asks you if she’s racist, telling her that she’s not would be a white lie, but not one that I approve of.
Looking at Augustine’s taxonomy, the terminology seems clearer, as it differentiates “lies told to please others in smooth discourse,” which is what I think Alicorn would associate with ‘white lies,’ from “lies that harm no one and that protect someone from bodily defilement.” (And note how the discussion of lies in religious teachings mirrors the discussion of lies in science!) As expected, Augustine thinks it’s better to lie to the Nazi than to lie to your aunt.
But again it seems the subjectivity shines through in the definition of harm, if you want to put the hidden Jew lie in Augustine’s last category. Isn’t the Nazi harmed when you lie to him, and he doesn’t get to catch the hidden Jew?
Most people WANT the Nazi to be harmed.
Indeed.
I would argue that the Nazi isn’t really harmed when you lie to him, because not having a preference satisfied is not necessarily harm.
The problem with that example is that “racist” as commonly used has several very different meanings, and there are political forces that intentionally try to confuse them.
I went back and looked at the examples again, and in each case it seemed to me that the question, plus what we know about the asker and their relation to the answerer, when combined with an expectation that the answerer will actually tell the truth, imposes a large expected cost on the person being asked.
Am I missing something?
I wouldn’t have read “others’ right to lie to you” as implying that you personally, Alicorn, must be okay with any person, even your friends and guests, lying to you when you have specifically and credibly indicated that you prefer they would not, and can handle the truth. But I can see how you might easily read it that way.
Also maybe could you explain what’s objectionable about that while tabooing “lie” and “right”?
Somewhat tangentially, that seems like a really nice way to live in some respects. Have you written about what it’s like, and when/how you started choosing exclusively truthtellers to be your friends anywhere?
I have begun to suspect that there is some kind of misreading here, but I’m not sure what I’m supposed to read in an undisclaimed second-person pronoun in an article I’m reading if it’s not “I claim this applies to Alicorn”.
Actually, in my framework, “right to lie to you” is very nearly ungrammatical. You have the ability to lie; talking about the right to do it is approximately nonsense—if someone interferes with your ability to lie, somehow, this would usually be wrong for reasons not having anything to do with your ability to lie in particular. My objection to it is that I think making statements that you believe to be false and intend your audience to believe is morally wrong barring atypical-for-people-making-utterances cases, and I don’t think people are under any obligation to tolerate moral wrongs being done to them, and urging people to do this is… I’m gonna go with “sketchy”.
Usually when I think of “white lies” I think of things that are not primarily intended to produce a belief about the literal content of that sentence at all—they’re a totally different type of social move only loosely related to their “meaning”. I’m thinking about things like this:
I gained a lot of social effectiveness when I figured out that these things aren’t the same kind of “lying” as when you primarily intend to produce the false belief. And just because there’s a false statement involved doesn’t always make them wrong or harmful. I don’t think the overly literal person who interprets these kinds of statements as assertions of fact is a straw man here on Less Wrong, either.
ChrisHallquist’s examples seemed a little more self-defense oriented than cooperative, and in those cases I do think that some amount of harm is being done—but in each case against a person who has already put the answerer in an unnecessarily difficult position, which they might not have the verbal skill to extricate themselves from without lying.
If I threaten or corner someone so they can’t respond truthfully without taking a loss (even if I don’t mean to), and then they use the one tool they have to extricate themselves, it would be a little much for me to condemn them for it, or cut them out of my life and try to exclude them from my circle of friends (which is not a total straw man in the comments here).
That’s not to say that the liar isn’t ever in the wrong—often they are! There are lots more times when lying is appealing than when lying is right. If I can produce a desired interpersonal outcome by telling the truth, I do—this is almost always—and I am very lucky to have the kinds of resources, friends, and verbal facility to be able to live safely with that restriction. But the mere fact that someone resorted to lying in a difficult situation shouldn’t end the discussion—why they lied is important.
Another class of lying—this one I admit to—is deliberately simplifying to produce a 30-second version of a 30-minute thought. Sometimes I forget to note that this is an oversimplification. Sometimes I leave details out of stories about myself, or change immaterial parts to make them flow better. I was not very good at explaining or telling stories until I gave myself permission to do this. I don’t do this if I think it will harm someone’s ability to get the outcomes they care about.
In this comment I list some things that are not lying, which include many of your examples. I’ll add now that I think anybody can waive any right they don’t happen to want, including the right not to be lied to, and reiterate also that you have to intend to be believed to count as lying, and clarify that being mistaken—including sincere mistaken-ness about remembering to include a caveat necessary for factual accuracy—does not constitute a lie.
If Bella has successfully communicated to Alice what she’s looking for, if Charlie isn’t making an attempt to cause Doris to believe he’s fine, likewise with Edward—then that might well be fine. (Is it a coincidence that every single name you chose except Doris is a Twilight character?)
...then you may well have forfeited contextually relevant rights. I read Chris’s post, saw an undisclaimed second-person pronoun telling me to respect others’ right to lie to me, and was like: “But… I didn’t do anything.”
Sometimes people cause others to feel cornered or threatened, without knowing it. That doesn’t make them bad people, but it would explain what might otherwise be “bad behavior” on others’ parts. And if anyone finds that people seem to regularly lie to them about certain kinds of thing, they should seriously consider the hypothesis that they are misunderstanding the interaction.
I know what that feels like. I’ve had that response to a lot of things that turned out not to be about me at all. It hurts at first. I try to read those things a second time, when I’m not feeling indignant anymore, to figure out whether it’s actually about me, or things I do. I try to avoid the generic “you” and “we”, and abstract pronouncements like that, for exactly that reason—I don’t want to be misunderstood in that way.
Not intentional, I was just looking for common names in alphabetical order, but likely Alicorn → Luminosity made those names more available. :)
These are good examples. I want to add one that I’ve observed/experienced, somewhat related to this one:
Sometimes, you’re talking to a person who has some importance in your life — a relative, let us say — and they ask you a question about your life (some aspect of your life that doesn’t affect them directly). You know that, if you tell them the truth, their reaction will be to lecture you, berate you, give you unwanted advice, yell at you, or otherwise engage you in an unproductive mode of interaction. You know this because this has happened before; you are quite sure that this interaction won’t change your mind (because you have good reasons for living your life the way you do, as opposed to the way your interlocutor wants you to), nor will your protests or arguments change their mind (because of their irrationality). Neither is it likely that this person will respond to requests to drop the subject.
So, you lie. Result: continued peaceful, pleasant conversation.
What do people here think of the moral status of such lies? I am genuinely curious. I myself am somewhat torn, and I’d like opinions.
I don’t think such lies are particularly wrong, but they aren’t the best way to go about dealing with the situation. Not that telling them the truth is better, because it leads to them acting as you describe. I think it’s best to say “I don’t want to talk about that” or “That’s personal”, and shame them if they pry.
In the interests of full disclosure, I should say that I’m not close to any of my relatives.
Heh. “I don’t want to talk about that” or “That’s personal” don’t come anywhere close to working in certain cultures (by which I mean both the unique culture specific to a family, and cultural groups such as e.g. Ashkenazi Jews — the archetypal Jewish mother who says “So, are you meeting any nice girls? What do you mean it’s none of my business?? Of course it’s my business! I’m your mother!” etc. etc.).
Edit: What’s with the downvotes all over this thread...?
If it doesn’t work, I recommend ending the conversation, saying something like, “If you’re not going to respect my boundaries, I’m not going to talk to you”.
This doesn’t apply if you’re financially dependent on said relative. If so, go ahead and lie as much as you need to.
Yeah, I have heard this sort of recommendation. I… don’t think I’ve ever actually seen anyone use it. I don’t know, it could be a good one. It’s a rather harsh thing to say, though, especially to, say, one’s grandmother. I don’t think I could do it.
I guess the point is, sometimes, not lying is hard? (If you’re the type to take an absolute stance against lying, your response might be along the lines of “Yes, doing the right thing is hard. That makes it no less right.” I remain… unconvinced.)
I’ve come close to using it, and just approaching it has been enough to get people to back down. In the long run, it teaches them not to ask you about those things, which is what you want. I can see it being rather harsh, though. I guess I have some difficulty imagining being in an interpersonal relationship in which I both feel strongly positively towards a person (enough to make me reluctant to say something like that) and at the same time have things that I have to lie about.
If someone wants to pry into your affairs and berate you about them, then you are perfectly justified in lying to avoid them getting on your case. And moreover, if they know they will get on your case if you tell them the truth, then they shouldn’t even expect you to tell the truth.
In this view, if they are clearly defecting on you by trying to get into your business, then you are justified in defecting on them. If they are laying a trap, and you walk into it, then you may encourage them to engage in that behavior in the future. Tit-for-tat.
To be extra clear, when I support lying to people who want to “pry into your affairs,” I am specifically referring to private information which isn’t their business. I am not referring to lying to cover up wrongdoing you’ve committed, or in ways that will cause them tangible harm. Those situations are more complicated. By “private” information, I am talking about information which is widely considered private in whichever culture is relevant, and which tangibly affects mostly you, not other people.
I am also following your premise that this person likely can’t be reasoned with based on a persistent pattern of their behavior, or persuaded to be more accepting of your behavior. Evasion, persuasion, and avoidance are preferable if they work.
Lying to people as tit-for-tat punishment seems valuable to me, but only within the confines of a narrow set of situations involving people who have demonstrated a consistent pattern of being hostile or controlling, and where evasion is infeasible. My endorsement of this notion should not be taken as a defense of lying in a broad range of situations.
Indeed, this has been my exact answer in cases where such lies have been found out.
Agreed entirely.
Also, agreed entirely.
The thing is that the prying person likely considers the private affair to potentially involve wrongdoing.
So if you were in a culture that permitted, say, slavery, and considered how one treats one’s slaves a private affair, would you still be willing to apply the above reasoning?
Maybe. There are several scenarios:
1. A prying person might believe that you might be engaged in actual wrongdoing.
2. A prying person believes that you are engaged in something that they think is wrong, but actually isn’t wrong.
3. A prying person doesn’t believe that you are doing anything wrong. They are just trying to get on your case because they are controlling or malicious. Or they think it’s fun.
In SaidAchmiz’s example of a nosy relative, it’s not at all clear that the relative believes he might be engaging in any moral infraction, unless that relative has an incredibly expansive notion of morality, as some relatives do.
No, and I don’t think this is accurate reading of my comment, though perhaps I allowed for confusion. In my comment, I discuss multiple conditions for ethically lying to people prying into private information:
That information is considered private in the relevant culture, such that the questioner knows (or should know) they are asking for information that is culturally considered private. If they know they are potentially defecting on you, then their behavior is worse. If they don’t know they are defecting on you, then their apparent defection may have been a mistake on their part, in which case, you should be less enthusiastic to engage in tit-for-tat defection.
The “private” information does not include “lying to cover up wrongdoing you’ve committed, or in ways that will cause them tangible harm.”
Since slavery is wrongdoing, then a slaveholder is not justified in lying about treatment of slaves, even in a past culture where slavery was considered acceptable and private.
Yes, some slaveholders may have believed that slavery was justifiable, and they were then justified in lying to cover up their treatment of slaves. But they were wrong, and they should have known better.
To conclude, I suggest there are some circumstances where it is justified to lie in response to prying questions about private information. This principle is contingent on classifying some kinds of questions as undeserving of true responses. I have not attempted a rigorous or comprehensive discussion of which questions are undeserving; that would be a much longer discussion, and you are welcome to provide your own thoughts if you consider it interesting.
I do believe that cultural notions of privacy are useful to estimate whether a questioner is being an asshole, though norms aren’t the only factor. If indeed a questioner is asking a question which should be considered unethical, abusive, or overly intrusive… and that type of question is also culturally recognized as unethical, abusive, or overly intrusive, then the questioner should know that they are being an asshole.
If it’s a morally ambiguous situation, where the other person can morally justify getting into your private business, or the ethics or intentions of their questions are unclear, then lying to them to protect your privacy is much less defensible.
What if you consider the information private, but the person asking does not (and you are both aware of the other’s views)? (That is, they know you think it should be private, but they disagree with you on that point.)
Good questions.
If you know that the other person believes that the information isn’t private, then you know that they aren’t knowingly doing something which they believe is prying. So they don’t have mens rea for being an asshole by their own standards. (Yes, I believe that sometimes people are assholes by their own standards, and these are exactly the sort of people who don’t deserve the truth about my private matters.)
If they don’t know my feelings about privacy, then they are not knowingly intruding. But if they do know my views on the privacy of that information, they are knowingly asking for information that I consider private. That could be...disrespectful. If my feelings about privacy on that matter are strong, and they ask anyway, then they may have mens rea for being an asshole by my standards. Perhaps they believe that my standards are wrong and that I should not judge them as an asshole for violating them.
If I thought there was legitimate disagreement about whether the information should be considered private, then I wouldn’t view the other person as defecting on me, and I wouldn’t feel motivated to lie to them to punish their defection. If I still felt motivated to lie, it would be for purely self-defensive reasons (for instance, I might lie to conceal health issues which don’t affect anyone else).
As examples, I think there are many questions between relationship partners, where the ethics of privacy vs. transparency are up for debate, e.g. “how many partners have you had in the past?”, “do you still have feelings for your ex?”, “have you had any same-sex partners?”
On the other hand, if I thought their view of privacy was ridiculous, and they can’t defend their view against mine, then I would be pretty annoyed if they still pressured me for information anyway. That sounds like a breakdown of cooperative communication, or the beginning of a fight. Lying might be an acceptable way to get out of this situation.
Surely there is some point where communication becomes sufficiently adversarial that you are no longer obligated to tell the truth? Especially if both people can tell there is a conflict, so they know to discount the other’s truthfulness?
For example, if your nosy aunt says “I feel that your current dating situation shouldn’t be private,” you say “I think it should be private,” and she continues to ask about your dating situation, then I think you are justified in lying. Your aunt is knowingly pushing for information that you want to keep hidden. She has no defensible argument that her view of privacy should trump yours.
Since you have stated that you think your dating situation should be private, your aunt shouldn’t even expect to get the truth out of you here, so if you lie, there is less danger of her being deceived. People are known to lie about matters that they consider private, and your aunt should take this into account if she chooses to needle you.
When I’m discussing lying to a prying person, I’m mostly imagining conversations that are non-cooperative or hostile, or which involve protecting secrets which mostly affect oneself. I am imagining nosy relatives, slanderous reporters, totalitarian judges, or ignorant coworkers who ask you why you are taking pills. Remember, my ethics generally prefers evading or refusing prying questions. If evasion doesn’t work, that suggests an uncooperative discussion or cornering has occurred.
You should also consider the possibility:
1a. A prying person believes that you are engaged in something that is wrong, but that you mistakenly think isn’t wrong.
Yes, I know this is technically a special case of 1, but it’s worth considering separately, since people tend to be bad at considering the possibility that they are wrong.
I think one of the problems here is that most people just don’t agree with you on that. And given this, your treatment of people who do a thing that you consider wrong, but they do not, is (in their eyes) very not-nice.
The fact is, you could (especially since you’re a deontologist) decide that any old thing is morally wrong. Perhaps looking at one’s watch is morally wrong. Perhaps using the word “moist” is morally wrong. Perhaps wearing green socks is morally wrong. I (that is, someone interacting with you socially) just don’t know. Perhaps your declarations of what is or is not morally wrong make sense to you, but to other people, they just look arbitrary.
And so what it looks like is that you have decided, apparently somewhat arbitrarily, that a thing that most people do regularly is morally wrong; and now you’re declaring that anyone who disagrees with you is a Bad Person, and not even straightforwardly: you’re making insinuations about their character (“sketchy”). This, to observers (or at least, to me), just doesn’t seem very nice or reasonable.
Well, in one sense, no one is under any obligation to behave decently and reasonably to their fellow humans. It sure would be nice, but if you protest that you don’t have a duty to do so, then sure, I won’t argue.
But insofar as anyone does have an “obligation” to behave decently, I think that saying you’re not obligated to refrain from disparaging the character of anyone who violates one of your arbitrary, personal moral rules is, to use a term from your own comments… not welcoming. To say the least. (So, for example, if you decided that wearing green socks is morally wrong, I think I would say that you have an “obligation” to tolerate people wearing green socks around you.)
This is actually not an accusation I’ve had leveled at me before. Consequentialists tend to object to how rigidly I define moral rules, not which ones are on my list. I’m pretty sure this is a strawman.
This is just an uncharitable misreading of me. I don’t think I’ll engage you in particular any further on this subject unless you produce a dramatically better understanding of my position.
When people misunderstand or misread what I say — as happens sometimes, a couple of comments to this post being examples — my response is usually an attempt to clarify my position, correct the misreading, etc. Most of the people with whom I have engaged here on LessWrong do similarly.
A response to an alleged misreading that consists of saying “That’s not what I meant; I won’t explain what I meant; and I won’t talk to you about this anymore” is not a particularly honorable discussion tactic. If you think I have misread you — as is, of course, possible — please explain how.
...”Honorable”?
You’re trying to have a conversation on a completely different level from any that interests me. I’m not playing. Please stop trying to paraphrase me, you’re bad at it.
This is what the “slash their tires” analogy is meant to cover.
Right—I am suggesting an alternative metaphor that is slightly but materially different.
Probably. I mean, you literally wrote the book. And the sequence. Even the name… I’m sure a floodlight is of great use up on your hill, but it doesn’t do much deep down here (wherever here is) in what might be fog and might be mud I can’t tell because I can’t see it well enough.
...what? Some kind of extended metaphor? Your meaning is totally unclear.
In the sequence Armok_GoB mentions I use light as an extended metaphor for self-knowledge.
Hmm. I haven’t read your full sequence (one of these days!), so if that’s the reference point then that explains my confusion. Thanks!
In that case a real honest answer might be: “I felt uncomfortable during that activity, but I don’t know whether it’s because of the activity or because I generally focus too much on the negative.”
That gives the person you are dealing with a lot of useful information for interacting with you. Sharing something deeper about yourself builds trust. If the person is well-intentioned, they can use the information in a way that makes the interaction better for both of you.
The goal of honest communication is to give the other person useful information. Transmitting more useful information is being more honest.
If you just say you loathe the activity or you say you liked it, you might be holding something back. If you have a trustworthy friendship, then knowing about your emotional state is useful information for your friend.
Your friend might be good at reading body language and be able to tell the difference between your fake smile and a real smile, but it makes it so much harder for a friend to help you when you aren’t open about what you are feeling.
To me, not being open about your emotions on a deep level when you are with friends or loved ones feels like defecting in a prisoner’s dilemma. You might get some immediate benefit, but overall it’s not the path of the game tree that’s optimal. To the extent that there are people who can’t deal with me being open about what I feel, I don’t want them as friends or loved ones.
Boy, I sure wouldn’t want to date a person like this (your girlfriend-at-the-time). She asked for your opinion; pressed you to actually give it, thus communicating (by any reasonable measure) that she actually wanted your opinion; and then, when you gave it honestly, was unhappy about it? That’s horrible.
I don’t think I’d ever willingly choose to be close to someone to whom I’d ever regret not lying in response to being asked for my opinion. The thought of living like that, living with the knowledge that honest communication is basically impossible because any time the person asks me (and presses me) about my opinion, I have to consider the possibility that what they actually want is lies — that this person prefers lies both to truths and to no comment — repulses me.
Demand by rational men for rational women exceeds supply, even taking into account that some of the women have harems. If you’re one of the lucky men, or a woman, be aware of your privilege and don’t criticize men who lack it.
I think the set of women you can be honest with in a relationship is much larger than the set of women who are full on CFAR style rationalists.
My experience is more like “real honesty, in or out of a relationship, only works with the upper echelon of CFAR style rationalists” though admittedly exposure to the naked, sharp gears of my own intellect may have more Lovecraftian results than it would in the population average.
Honest about carefully selected safe topics? Or about the weird ones?
I agree with the point in your first sentence, but I’m not sure I follow what your advice is in the second sentence.
Are you suggesting that my criticism comes from having rational women to date, whereas Chris (at the time of the anecdote) did not, and so was forced to date an irrational woman, for which I was criticising him?
Those are three wrong things, it seems to me:
I don’t find it to be the case that rational women occur in abundance in my dating pool;
No one (presumably) forced Chris to date the young lady in question;
I wasn’t criticising him for his dating choices; if I was criticising anything, it was his advice that we accept such behavior in our partners / friends, and expressing the view that I, personally, would not accept such behavior.
P.S.
Really?
That surprises you? Do you think rational women wouldn’t want harems?
Scott tells us that polyamory seems like a suboptimal way to get sex, and I assume this holds true even for women—technically. But sex is not fungible.
Um… sure, that surprises me a bit. Also that they have the harems, even given wanting them.
I don’t really know what you are saying in your second paragraph. Please explain?
...What?! You’re surprised that rational people who are in demand can get what they want?
I may try to explain the second part later, but in my current condition I don’t get your confusion.
Depending on what “what they want” is, yeah, I might be surprised.
I mean, clarify for me, what are we talking about here? “Polyamory is relatively common in rational circles, and poly relationships in said circles often/sometimes/commonly consist of (i.e., are circumscribed by) one woman who is dating several men”?
Harem is a bit misleading as it implies dominance and ease. Polyamory presumably requires work to keep the people around you and to prevent drama, and that situation doesn’t seem obviously preferable.
That doesn’t entitle any irrational woman to date any rational man. Men are allowed to stay single, you know.
It’s better to be single than to date someone irrational.
If everyone thought like that, I’d never get a date (and neither would anyone else, of course).
Perhaps (though I’m not sure*), but even if so, that’s no great loss, because getting a date isn’t good in itself, it’s only good if it’s with someone with whom you’re compatible, and rationality is critically important for that.
Also, this would have the effect of making rationality a more desirable trait, and irrationality a more costly one.
*It’s definitely not true for everyone, as there are relationships in which both partners are rational.
As best I can tell, “people who sometimes ask questions they might not want to hear the answer to” are a large majority of the population. “Does this dress make me look fat” is a cliché put-you-on-the-spot question for a reason.
Sometimes is an important word here. Too often, and it might be an issue, but it’s not like this was a regular occurrence with her. (A big THANK YOU here to Pablo and hyporational for noticing they shouldn’t be making too many assumptions based on one anecdote.)
Now, another approach is to exclusively date people who value total honesty at all times. But (1) there are other qualities I value more in a mate, and (2) I suspect such openness to “total honesty at all times” tends to correlate with being socially inept and overly honest even with people who don’t want that, qualities I’d like to avoid.
To reiterate a point I have made several times in this post’s comments:
“Valuing total honesty at all times” and “refraining from pressing someone for an honest answer when what you actually want is a lie” are two very different things.
Correspondingly, being totally honest at all times, unprompted, is not the same as being honest when specifically pressed for an honest answer.
I try to restrict my circle of friends to people who do not ask precisely such put-you-on-the-spot questions. That, among other policies and attitudes, makes my circle of friends small.
Or, to put it another way: people worth being friends with are rare. And those are the only people I want to be friends with.
(BTW, I usually answer that with “you looked better in that other one”, so I don’t offend her but I still help her choose flattering clothes.)
You’re misunderstanding the message.
“Does this dress make me look fat?” is not really a question. It’s a request for a compliment.
If I may engage in gender generalization for a moment, men usually understand words literally. This annoys women to no end as they often prefer to communicate on the implication level and the actual words uttered don’t matter much.
In a sense, yes. But less-cliché questions sometimes get used the same way, and you have to be on guard with that.
(You can argue that giving the expected responses to such questions isn’t technically lying, but that seems like semantic hair-splitting to me.)
Depends on the details. I don’t think there’s anything necessarily unreasonable about the following sequence of events: A wants some information from B, and presses for it despite B’s reluctance. When the truth actually comes out, A finds it upsetting. (“Do you love me?” “Yes, of course.” “It sometimes doesn’t seem that way. Seriously, and honestly, do you really love me?” “Well … no, not really. I just enjoy having sex with you.” “Oh, shit.”)
Now, being upset because your boyfriend thinks the acting in a play wasn’t much good? Yeah, that seems less reasonable. So I agree that this probably wasn’t a great relationship to be in. But I really can’t endorse any general claim that it’s bad to press for someone’s opinion when one of the possible answers would upset you.
Having the truth upset you, and being angry at a person for telling you the unpleasant truth, are two very different things.
But there are times when both are appropriate. Example: “did you strangle my puppy?” It’s hardly unreasonable to expect an honest answer and then be angry at the person when the honest answer is “yes.”
More generally, it is not inherently contradictory to expect total honesty and to be occasionally angry at what that honesty reveals.
In that case, you’re not angry at the person for telling the truth, you’re angry at them for having strangled your puppy. Similarly, in the love example, the problem isn’t so much the fact that B told A the truth, the problem is that B had systematically lied to A in order to get sex before. In neither case are you actually angry at the person for telling you the truth, you’re angry at them for committing a separate moral wrong.
This seems different from “did you like my play”, since disliking a play isn’t a moral wrong by itself. In that case you really are angry at someone for telling the truth.
I personally am not so much of a saint as to only get mad at people for moral wrongs. I can absolutely see myself getting angry at a close person for not liking a book I wrote / play I directed / whatever. It still has nothing to do with truth—I want them to be honest, I just want them to honestly like my stuff! (Of course that isn’t entirely mature and fair, but people get their emotions all tied up in their artistic work).
That’s exactly my point. And I conjecture that what upset Chris’s girlfriend was the fact that her boyfriend wasn’t impressed by her friends’ acting. I could, of course, be wrong. If her problem was simply that he’d been tactless enough to tell her what she asked him to tell her, then indeed she was being grossly unreasonable.
If that’s indeed what upset her, then she was also being unreasonable. Consider:
Chris could have been unimpressed because the acting was, in fact, bad. (Let’s not get into whether art can be objectively bad, or any such thing; that’s not the point of the discussion.)
If so, then his reaction is information that the acting is bad. Being angry at the messenger who is conveying this information to you is unreasonable.
On the other hand, Chris could have thought the acting sucked because of differing tastes, and not any objective badness of the acting.
If so, then what his girlfriend has just found out is that their tastes don’t entirely align in this arena. Being angry at Chris for this revelation is, also, unreasonable.
So, in either case, being angry at your boyfriend for not being impressed with your friends’ acting is unreasonable.
Unless, of course, you take the view (as did another poster elsewhere in the comments) that one may, and should, alter one’s opinions on the basis of what one thinks will please one’s close ones. I strongly reject such views.
It could be that she thought the most likely explanation for him not liking their acting was because he had unrealistic expectations or didn’t watch the show with an open mind.
Both of those suggestions confuse me.
“Their acting sucked. I expected it to be good!” “Well, that was unreasonable of you! Clearly, you should have expected it to suck!” “Oh, well, in that case… yep, it sucked.”
???
What on earth does that mean...?
More like:
“That show was not in the top 30% of all entertainment I have ever consumed.”
“...How was it, as amateur theater goes?”
“Oh, easily top fifteen percent there.”
The open-mindedness criterion is a little harder to explain.
But unfortunately humans aren’t very good at telling them apart. (But on the other hand some humans are worse than others and you have no obligation to date one of the former.)
In that scenario lying may be better for both in the short term, but lying about being in love with someone to trick them into sleeping with you seems pretty likely to upset them more in the long term. And there are gentler ways to put it: honestly explaining that it’s mostly a physical thing could reduce the immediate negativity considerably, though the amount depends on the listener’s disposition.
I agree that it’s not necessarily unreasonable for a truth to be upsetting, but it is somewhat unreasonable to press someone for a truthful answer (especially about something important) and then be upset with them specifically for being honest, especially if they have indicated discomfort giving a direct answer and tried skirting around the subject (since this hints that it may be an uncomfortable truth they want to avoid), even if such behavior is pretty common in many circles.
For the avoidance of doubt, in that situation I agree that one shouldn’t lie. I was commenting not on B’s behaviour and attitude but on A’s.
And, also for the avoidance of doubt, if Chris’s girlfriend was upset that he told her the truth rather than that he didn’t like her friends’ acting then she was being 100% unreasonable. (And, as I said, even if it was the latter, still pretty unreasonable. I was making a more general point.)
Human beings are complex creatures, and the decision to date a person involves weighing up the different elements that make up that complexity. At the risk of sounding presumptuous, I’d say that in your current state of almost total ignorance about the physical and psychological traits of Chris’s ex-girlfriend, you are simply not in a position to know whether or not you’d want to date her. (Perhaps a focusing illusion—“nothing in life is as important as you think it is, while you are thinking about it”—was involved in causing you to believe otherwise.)
ETA: After reading the replies below, I realize I had misinterpreted Said’s comment above as making an all-things-considered claim, when in fact the claim was supposed to be subject to a ceteris paribus clause.
It seems this objection could largely be ameliorated by the inclusion of a ceteris paribus clause. Or, given the way you phrased it, perhaps a measure of just how many units on the Craziness/Hotness scale the behavioural pattern moves her.
EDIT to remove references to mythical three-headed guardians of Hades.
Yeah, it seems I misunderstood the original comment.
To be fair to your reply, the original comment is worded rather strongly and without care for precision. As such, your reply is valid, even if slightly less charitable than it could have been.
I’m pretty sure I got it wrong too.
So, essentially, this is: “yeah, sure, my boyfriend/girlfriend has this horrible aspect of their personality, but they were otherwise a good person / the sex was great / whatever”.
Ok. Sure. If your criticism would be obviated by the addition of a ceteris paribus clause to my comment, then consider it added.
You can say that about almost any undesirable personality trait, though. That doesn’t make said trait any more desirable. Many things can be very undesirable without being hard dealbreakers (especially if discovered after you’re already involved with the person). All else being equal, though, I would certainly prefer dating a person without the trait in question, than with.
Looks pretty normal to me. One incident isn’t a strong indicator of personality, I think. There are situations where a significant fraction of people want to be lied to in a reassuring way, and these situations can be recognized reliably enough if one has the necessary skills to do so.
There are skills that allow you to discern when people actually want your opinion and when they’re just asking for reassurance. Wouldn’t you rather have those?
That word always¹ sounds to me like its only point is to sneak in the connotation that what’s usual must therefore also be desirable.
“Normal is a cycle on a washing machine.”
¹ Not literally.
My point is you mostly don’t get to choose what’s normal, whether it’s good or bad, so you might as well consider adapting to it*. If you come up with a less disagreeable expression of usuality that fits this case, I’ll make the switch.
*This obviously applies only if it fits your other goals.
I’m torn between upvoting this comment for the footnote and upvoting this comment for the insight. Decisions, decisions.
A significant fraction of people do all sorts of things. That doesn’t mean I want to associate with them, much less date them.
Yes, I would definitely want to have those skills — and I would just as definitely want to not have to use them on someone I was dating, or otherwise close to.
Those people you’re ruling out might have other redeeming qualities that will be less available to you because of this restriction. Why is this one so horrible that they aren’t even worth considering?
Well, I didn’t exactly say such people wouldn’t be worth considering. (See my reply to Pablo_Stafforini.)
I do think this one’s pretty bad, though. (Elaboration here.)
As for “but if you rule out X, then you won’t get the chance to potentially get Y!”, I find such arguments unconvincing, because they generalize so easily. “If you rule out serial killers as potential friends, you might miss out on some people with whom you could hold such interesting after-dinner conversation, not to mention the many other redeeming qualities that a person might have in spite of a predilection for axe murder!” Sure, maybe, but I think I can manage to find interesting friends without a history of violent crime. I don’t have to settle.
Likewise with abhorrent personality traits: my choice isn’t “accept people who are terrible in some important way” or “be alone forever”. (And even if it were, I might strongly consider option b.) There’s always “find someone who isn’t terrible in any important way”. Such people exist, it seems to me. I don’t know, maybe I’m just an optimist?
The obvious difference here is that serial killers are rare. White liars are extremely common and the kind of honesty you’re preferring is rare, so you’re ruling out a lot more people. (ETA: in those elaborated comments you seem more specific and reasonable than I thought.)
How probable do you think it is that you’re hanging out with people who are more dishonest than you think they are? Are you comfortable with your ability to discern these kinds of qualities in people? Do you acknowledge the prior?
But each of the people you’re ruling out is in turn ruling out lots of people other than you and therefore is more likely to be available.
In other words, honesty is a high-variance strategy.
That makes sense. The only problem, it seems, is recognizing the right individuals. The goth guy vs. the normal guy is much more obvious than the honest guy vs. the pretending-to-be-honest guy. Everyone benefits from being seen as honest.
The kind of honesty where you’re willing to own up to disliking the play your girlfriend did stage crew for doesn’t seem to me like something that many people successfully fake.
Some people seem to use honesty as an excuse for being deliberately obnoxious. Though I don’t know how often what they do would count as successfully faking anything.
Well, the OP did not specify which words he used to tell his then-girlfriend that.
On the one hand, yes. On the other hand, the number of occasions where you get to display such honesty and thereby differentiate yourself from normal moderately-honest people isn’t that large. Combine this with the low base-rate of extremely honest people, and they may easily end up never finding each other.
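For concreteness, here’s a toy model of that worry. All the numbers are made-up assumptions (the base rate, the dating pace, and the chance that a given date even surfaces an honesty-revealing occasion are not anything established upthread); the point is only how fast small probabilities compound.

```python
# Toy model (all numbers are made-up assumptions) of the worry that
# extremely honest people rarely manage to recognize each other.

base_rate = 0.02       # assumed fraction of the dating pool that is extremely honest
dates_per_year = 10    # assumed number of candidates met per year
p_reveal = 0.2         # assumed chance that any one date produces an occasion
                       # that actually displays (and lets you verify) such honesty

# A single date "counts" only if the person is extremely honest AND
# an honesty-revealing occasion comes up during it.
p_match_per_date = base_rate * p_reveal

years = 10
n_dates = dates_per_year * years

# Probability of at least one recognized match over the whole period.
p_at_least_one = 1 - (1 - p_match_per_date) ** n_dates
print(f"P(at least one recognized match in {years} years) = {p_at_least_one:.2f}")
# With these numbers: about 0.33 -- under these assumptions, most such
# people would indeed go a decade without finding each other.
```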
Whoa whoa. Who said the category of people I was referring to is as broad as white liars?
I don’t hang out with all that many people, and those I do hang out with, I’ve known for some time, so I would say: not very probable.
Comfortable enough to spot honesty after knowing someone for ten years or more, yeah.
Yeah, sorry about the misunderstanding; I edited the gp accordingly.
The trouble is, you have to be really good with those skills and get things right almost all of the time before they’re worth much, since people weigh negatively-perceived interactions much more strongly than positively-perceived interactions in close relationships.
That actually reminded me of my parents. My dad is not allowed to say that he dislikes a dish prepared by my mom, even if asked for his opinion. Whenever I ask him if he liked one of my dishes, if I notice any hesitation I usually qualify it with “You can say no”.
Wow. Yeah, see, that’s exactly the kind of relationship dynamic of which the very thought horrifies me.
I, too, sometimes make similar comments to people to convey that yes, I really do want their feedback on my cooking/baking, because getting better is important to me. Empty praise is worthless to me.
Empty praise is actually useful, for absence of evidence reasons. Especially if the work you want feedback on is the type that that person should be able to effectively critique.
Once you start considering empty praise to be evidence of dislike, you may also want to fake people into thinking you think they like things, because they are probably modeling you using themselves when they decide that lying is best for you. They are not truth-obsessed rationalists, so they probably prefer to think their attempt to trick you was successful. Being asked for a critique of someone’s work can be uncomfortable, and thinking you’ve hurt their feelings is even more uncomfortable.
Ok, that’s beyond my ability to keep a chain of models-within-models straight in my head. Could you elaborate?
Actually, you know what — scratch that. The more salient point, I think, is that having to strategize basic conversation to that extent is a) much too hard for my preference, and more importantly, b) something I definitely do not want to be doing with close friends and loved ones. I mean, good god. That sounds exhausting. If someone forces me to go through such knots of reasoning when I talk to them, then I just don’t want to talk to them.
I wouldn’t want to be in that kind of relationship long-term either. But I still have to interact with normal people too, and enjoyment is often not the goal there.
Edit: also family, whose company you don’t want to discard entirely because of a few flaws like playing social games like this.
Sorry if I said it unclearly, but all I meant was, “make them think they tricked you.”
No, empty praise is still worthless, because Said’s cooking and baking are not perfect, and there is with near certainty some small flaw, some awkward stylistic choice that could use improvement. Best is the gentle nitpicking of these flaws with a prepended (This is amazing, but) and with the consequent inference that the bread/food/what have you is actually already REALLY GOOD.
There is value to knowing the quality of your work apart from knowing ways to improve it.
For example, “Should I personally cook something for this upcoming potluck, or should I let my spouse do it?”
The problem is that knowing how well you cook doesn’t really affect who should cook past a certain basic point of competence, as far as I can tell.
I agree with your point but I think you may have misunderstood Mestroyer’s comment (totally understandable, as I found his comment difficult to parse, myself).
I take from your response that you interpret Mestroyer as referring to a scenario where there’s nothing in my work to criticize, and I ask for feedback and receive praise, and correctly interpret the absence of criticism as evidence for there being nothing to criticize.
I don’t actually think that’s the scenario Mestroyer had in mind, based on his second paragraph. (Or was it? If so, then he ought to adjust his terminology, because the term “empty praise” is not appropriate in that context.)
Not if it’s explicit, well-understood and is just one of the rules of how the game is played.
There are LOTS of ways to convey meaning besides just blurting it out.
Mmmnope, that definitely doesn’t change the horror.
(I’m not sure how to take what looks to be a correction to a statement about my feelings about something. Regardless, it’s misplaced.)
That was just a shorthand way of saying “I am surprised that you feel this way given that I see the world in a way that...”
Fair enough. In that case, to clarify my response:
I acknowledge that your view of things is plausible in many cases; taking said view does not change my feelings about the situations in question.
Well, let me clarify, too, then :-)
I didn’t really have the particular situation of pianoforte611 in mind. I am sure there are many families where communication between spouses is ritualized, lacks meaningful content, is a mess in general, and where no one can actually say what they really feel.
My point was—and I should have phrased it better—that, for example, a prohibition on criticizing cooking may be a symptom of such a dysfunctional relationship, but does not necessarily have to be. Relationships tend to have many implicit rules about what means what. I can easily imagine a good, healthy, intimate relationship where you just can’t tell your girlfriend “Oh, today you look terrible” in the morning even if she, in fact, does look terrible. And that doesn’t sound horrible to me.
To make this point yet again[1], there’s a difference between not wanting (or outright forbidding) spontaneous criticism and forbidding criticism that is provided when asked. In pianoforte611’s example, his dad is forbidden from saying the cooking’s bad even if he’s asked for his opinion.
Telling your girlfriend “Oh, today you look terrible”, apropos of nothing, seems like a reasonable thing for said girlfriend to object to. If she asks you “How do I look today? Please be honest”, and then you’re not allowed to answer honestly, lest you break the Rules Of The Relationship — that seems obviously dysfunctional to me.
[1] Sorry if I sound frustrated, but people seem to keep ignoring this distinction.
Edit: Upon a bit more consideration, pianoforte611’s example seems even more dysfunctional than at first glance. I mean, if you forbid someone from criticizing you even in response to a request for an opinion, and both parties are aware of this prohibition, what does it signify when you go ahead and ask them for their opinion anyway? It seems like a really ugly power dynamic: one person says “Well, what do you think of my cooking, honey? Hm? Be honest, now...”; all the while knowing full well that the other person can’t answer honestly, lest they break The Rules; holding this over the other person; and fully expecting, correctly, that the other person will dutifully lie, while dutifully pretending that they’re telling the truth — in other words, will submit to the first person’s display of dominance in the relationship.
Of course that could be an exaggeration in the particular case of pianoforte611’s family. But I’ve actually seen this exact dynamic play out in real life, and it’s a common enough cultural script, as offered up regularly by e.g. Hollywood.
That depends. Words are only one of many levels of communication between a couple. You should understand your girlfriend enough to know when she actually means “Please be honest” and when she doesn’t even if she says the same words and their literal meaning is “be honest”. Again—it may well be a symptom of a dysfunctional relationship but it does not automatically have to be.
A lot of communication is non-verbal. A lot of meaning flies across regardless of which words are being said. I feel it is a mistake to focus solely on the literal meaning of the words pronounced.
Well, ok. I suppose if people are ok with having a relationship where communication is that complicated, and it works for them, then far be it from me to speak against that. (Not being sarcastic or passive-aggressive here; I genuinely don’t care how other people conduct their relationships so long as it doesn’t affect me.)
But I certainly am not interested in being with someone who would say “Please be honest”, but then expect me not to be honest, but only sometimes, and then expect me to know when is which. Nooo sir, I surely am not.
People come as complete packages :-) Some things may be deal-breakers, but some things may be compensated for by other advantages.
Oh, and communication is complicated.
I refer you to this comment thread and also this comment here.
I understand the sentiment, but I’d caution that the desire to be able to express yourself freely can be seen as cover for having license to say whatever you want without regard to how it affects the other person. This is bad even if you don’t intend to use it that way: you should be spending some cycles thinking about how the other person will feel about what you say. I speak from experience: saying what’s on my mind has at times been hurtful to people I care about, and I should have censored it or redirected the impulse.
Perhaps part of what you’re objecting to is not that the person prefers you to lie, but that they prefer a world that can’t exist to exist. If this were really what’s going on, that would be a severe lapse of rationality. But that world can exist: our opinions are mutable and it’s quite possible to decide to like the play. The conversation is actually about something completely different: whether you’re willing and able to emphasize the positive over the negative aspects of something for her sake, which is an essential skill in any relationship.
The conversation is also about asking for acknowledgement and approval for something she’s worked hard on and probably partially identifies with.
Please note that I’m not saying this is easy or obvious. Empathy is a difficult skill and requires training (or socialization), followed by practice and attention even for those to whom it comes easily.
(and now, the other part of my reply to your comment, with a quite intentional difference in tone)
Certainly. I’m not suggesting that you ought to just run your mouth about any opinion that pops into your head, especially without giving any thought to whether expressing that opinion would be tactful, how the other person will feel about it (especially if it’s a person you care about), etc. Often the best policy is just to shut up.
The problem comes when someone asks you for an opinion, and communicates that they really want it. If they then take offense at honesty, then I am strongly tempted to despise them immediately and without reservation. (Tempted, note; there may be mitigating factors; we all act unreasonably on occasion; but patterns of behavior are another thing.)
One of the issues with behaving like this is: so what happens when you really do want the person’s opinion? How do you communicate that? You’ve already taught your partner that they should lie, tell you the pleasing falsehood, rather than be honest; how do you put that on hold? “No, honey, I know that I usually prefer falsehood to truth despite my protestations to the contrary, but this time I really do want the truth! Honest!” It erodes communication and trust — and I can think of few more important things in a relationship.
Behavior like this also makes your partner not trust your rationality, your honesty with yourself; I don’t think I could be with a person whom I could not trust, on such a basic level, to reason honestly. I couldn’t respect them.
Yes. Certainly. Heck, I sometimes don’t want to hear the truth, or someone’s honest opinion of me. Not because I am necessarily in denial, or any such thing, but because I don’t want to think about it at the moment; or any number of reasons.
But you know what I don’t do in that case? I don’t ask them for their honest opinion! I don’t do what the girlfriend in the anecdote did, which is essentially to demand that someone close to her lie to her, and furthermore without acknowledging that this is what she was asking! To demand that your partner subvert their reason, and engage in doublethink to support your own irrationality… I really have a hard time finding words strong enough to express how much the very idea revolts me.
But then, this might be one of those “different people have different values/preferences” things.
One scary thought I had a while back is that this is essentially what friendship and especially love is, i.e., sabotaging one’s rationality, specifically one’s ability to honestly assess one’s friend’s/lover’s usefulness as an ally, as a costly way to signal one’s precommitment not to defect against the friend/lover even when it would be in one’s interest to do so.
On the other hand, it’s just as easy to make up a story in the opposite direction: friendship and love are what we call having a true judgement of a person’s fundamental virtue that is unswayed by transient, day to day circumstances.
“Love is the inability to follow a logical argument concerning the object of one’s affection.”
… is a quote along those lines, from a former classmate; with which I do not actually agree.
But “usefulness as an ally” does not at all fully capture a loved one’s value to me, even absent any failures of rationality.
(Not that you said it does, I’m just pre-empting likely replies.)
Feel free to substitute whatever you feel is appropriate for “usefulness as an ally”.
The difference is that I explain it in terms of game theory.
Possible, but utterly abhorrent.
Doublespeak for “doublethink, self-deception, and lies”.
One can acknowledge hard work without lying about outcomes. Approval given regardless of worth is meaningless and devalues itself (because if I approve of what you made, even if it’s crap, then my approval is worthless, because it does not distinguish good work from bad, good results from dreck).
Perhaps, then, she should heed Paul Graham’s advice to keep her identity small; and apply the Litany of Tarski to whether the thing she worked on was good.
Sure, but something can be difficult, non-obvious, and undesirable.
I strongly disapprove of equating empathy with deception and tacit support for irrationality and emotional manipulation. They are not the same.
That is something that people do have actual conversations about, something that is, indeed, important to consider and a good reason to adopt the practice of emphasising positive things. However, it is not the kind of conversation that SaidAchmiz was talking about unless you read it extremely uncharitably.
There is a rather distinct and obvious difference between emphasizing the positive aspects of something and emphasizing something that does not exist. In fact, choosing to emphasize something that does not exist entails outright failing to emphasize a positive aspect of the thing in question. Sometimes that is necessary to do, but doing so does not constitute a conversation of the type you describe.
Your reply has distinct “straw man” tendencies.
You’re right, I made some assumptions that probably don’t apply to SaidAchmiz, and I realize my comment comes off poorly. I apologize. I was trying to refer to the situation from the OP, but found it difficult to write about without using a hypothetical “you” and I’m not entirely satisfied with the result.
What I was trying to get across is that this kind of situation can be complex and that the girlfriend in the scenario can have legitimate emotional justification for behaving this way. I agree that wishing you’d lied is a bad situation to be in. I agree that the OP’s story is not a very good mode of interaction even if handled the way Sam Harris would. I agree that people should be able to have explicit conversations about emphasizing positives rather than veiled ones (which I was trying to get at when I said the conversation was “actually” about that).
I don’t mean to imply that SaidAchmiz wants to feel completely free to say anything regardless of consequences. I’m trying to say that I have felt that tendency myself and have unintentionally taken advantage of a “we should be able to say anything to each other” policy as an excuse not to think about the effects of my speech.
Hopefully this is clearer. I’m only trying to relay what I’ve learned from my experiences, but maybe I’ve failed at that.
Particularly given the replies you have prompted, it is worth emphasising the ‘pressed you’ phrase. The combination of pressing for honest feedback and handling it poorly is a very different thing to handling honesty poorly without attempting to force ‘honest’ feedback to be given.
(Note that the information given does not lead me to conclude that the girlfriend must have been executing that pattern but hypothetical people who do so do thereby lose some measure of want-to-date-them-ness.)
It’s possible that she learned that pressing for an honest opinion isn’t a good idea for her.
While this is true, it is also true that knowing that a given person won’t lie, that they will tell you how bad the acting in your play is, makes their praise even more valuable; because one knows that it is not a white lie.
By allowing yourself the small lies, that is what you are trading away. Whether it’s worth it or not, I can’t say for sure...
In theory, committing to not lying has some advantages, but in my experience, it doesn’t actually work. In my experience, people who commit to not lying are less accurate and less trusted than those who don’t. And I’m pretty sure the causality flows from the commitment and not from a third factor.
This runs contrary to what I would expect.
Could it be that people who commit to not lying:
Do not follow through on their commitment
Proceed to twist their words so as to be dishonest without technically lying
Or is there some other reason for this?
Failure to keep the commitment is not the problem in my experience. A person who deceives by technical truths usually gets a reputation as a pathological liar; a well-deserved reputation, I would say. Self-deception is a much bigger problem for accuracy. But that hardly scratches the surface of the problems.
You have a theory. Theories are great! But test it. Pay attention to people’s reputations for honesty, accuracy, and trustworthiness.
This will be difficult. In my limited experience, very few if any people are intentionally dishonest with me. (I may simply be very lucky in that respect). This puts me in a fairly poor position to gauge other people’s reputations for honesty.
Accuracy is another matter entirely. Accuracy is a function of intelligence and of the ability to accurately observe the environment. I can easily see accuracy being entirely independent of any commitment against lying.
This reminds me of something Mark Horstman (I think) said: that people are entitled to honest answers to questions to which they are entitled to an answer. He was using it in a workplace context; for example, if one’s boss asks about one’s sex life, it’s okay to lie, because she is not entitled to an answer, thus she is not entitled to an honest answer. Good post.
I think an important additional concept being invoked in the above example is that the person you are lying to has social power over you. While generally abiding by a wizard’s code of speaking the literal truth, I consider there to be a blanket moral exemption on lying to the government. It is not always pragmatically wise to lie to a government official, but in a moral sense the option is at your discretion.
For example, when the TSA asks you if anything in your luggage could be used as a weapon, you just lie.
Certain interactions with the government (assuming you are behaving peacefully) seem like a special case of dealing with an adversarial or exploitative agent. When an agent has social power over you, they might easily be able to harm or inconvenience you if you answer some questions truthfully, whereas it would be hard for you to harm them if you lied. Telling the truth in that case hurts you, but lying harms nobody (aside from foiling the exploitative plans of the other agent, which doesn’t really count).
A more mundane example would be if a website form asks you for more personal information than it needs, and requires this information. For instance, let’s say the website asks for your phone number or address when there is suspiciously no reason why they should need to call you or ship you anything. If you fill in a false phone number to be able to submit the form, then you are technically lying to them, but I think it’s justified. Same thing for websites that require you to fill in a name, but where they don’t actually need it (e.g. unlike financial transactions, or social networks that deal with real identities).
The website probably isn’t trying to violate your rights, but it’s trying to profit from your private information, either for marketing to you (which you consider pointless), or selling the information (which is exploitative, and could result in other people intruding into your privacy). Gaining your info will predictably create zero sum or negative sum outcomes. Lying is an appropriate response to exploitation attempts like these. And if they aren’t trying to exploit your private information, or use it to give you a service, then they don’t really need it, so lying doesn’t hurt them at all, and you might as well do it to be safe from spam.
Telling the truth is a good default because human relationships are cooperative or neutral by default. But the ethics of lying are much more complex in adversarial or exploitative situations.
Most people are neither too dull to imagine or recall from a movie the ways to use ordinary items in their luggage as weapons, nor lying, when they say no…
Either you have included an unintended negative, or you are saying that nothing in most people’s luggage could be used as a weapon.
Or it’s just that “lying” implies an attempt to deceive.
Words are meant to communicate meaning. I wouldn’t consider it lying if someone communicates in a sense that properly answers the meaning of the question, even if the question is clumsily asked.
Likewise, I would consider it lying if someone uses words which are literally true, but does so in a manner meant to deceive the listener.
There’s no time to explain in excruciating detail that the TSA wants to hear about, say, handguns that people forget to remove from their luggage, tools such as nail guns, assorted sharp pieces, etc, but not about how you can hit someone on the head with a laptop. And that if it’s there by mistake, a lot of time is saved by you telling them about it and them not having to assume that you’re a bad guy trying to conceal it.
And within the limited number of sufficiently short sentences there’s not a single one that exactly describes what is meant. Words have to be used, in lieu of telepathy, such as “weapon” meaning something that is sufficiently weapon-like and effective as a weapon to be a problem.
As much as we need accessibility, there is just no practical way to accommodate for communication related disabilities in a screening line at an airport.
Eliezer would quote the relevant HPMoR scene, were he trying to be honest.
Wait, what?
You’re saying it’s never morally wrong to lie to the government? That the only possible flaw is ineffectiveness?
Either I am misreading this, you have not considered this fully, or one of us is wrong on morality.
I think there are many obvious cases in which in a moral sense, you cannot lie to the government.
Example, please?
Let’s start with basic definitions: morality is a general rule that, when followed, offers a good utilitarian default. Maybe you don’t agree with all of these, but if you don’t agree with any of them, we differ:
-- Applying for welfare benefits when you make $110K per year, certifying you make no money.
Reason: You should not obtain your fellow citizens’ money via fraud.
-- “Officer Friendly, that man right there, the weird white guy, robbed/assaulted/(fill in unpleasant crime here) me.”
Reason: It is not nice to try to get people imprisoned for crimes they did not commit.
-- “Yes it is my testimony that Steve Infanticider was with me all night, and not killing babies. So you shouldn’t keep him in custody, your honor.”
Reason: Even if you dislike the criminal justice system, it seems like some respect is warranted.
-- “No, SEC investigators, I, Bernie Madoff, have a totally real way of making exactly 1.5% a month, every month, in perpetuity.”
Reason: You shouldn’t compound prior harm to your fellow humans.
-- “I suffer no sudden blackouts, Department of Motor Vehicles.”
Reason: You should not endanger your fellow drivers.
That was five off the top of my head. This is in response to SaidAchmiz, because I still think it’s possible that Eliezer meant something different than I interpreted, though I don’t understand it. I also think that in the U.S. you shouldn’t lie on your taxes, lie to get on a jury with the purpose of nullifying, lie about bank robberies you witness, lie about your qualifications to build the bridge, lie about the materials you intend to use to build the bridge, lie about the need for construction change orders, lie about the number of hours worked… you get the picture.
I understand that some disagree. I also understand that if you live in North Korea, the rules are different. But I think a blanket moral rule that lying to the government has only one flaw—you might get caught or it might not work—is a terrible moral rule.
Because the government has power over you, you get no moral demerits for lying to them? Nuh-uh.
I’m not entirely sure what “a good utilitarian default” means, but I suspect I disagree, since (I strongly suspect) I am not a utilitarian.
It’s not clear to me that deserving or needing your fellow citizens’ money is what entitles you to their money (assuming anything does), so I don’t think I entirely agree. This is one of those cases where it feels to me like I’d be doing something wrong, but trying to pin down exactly what that something is, is difficult.
“not nice” is quite an understatement, so yes, I agree.
Why is some respect warranted? What warrants it?
I neither understand finance well enough to grasp this situation, nor do I have any idea what “compound prior harm” means, so I can’t comment on this one.
Agreed.
It seems like the pattern so far is: lying to the government is clearly bad when it would clearly cause harm to your fellow humans. Otherwise, the situation is much more murky. And I think that’s consistent with the way I interpreted Eliezer’s comment, which was something to this effect:
“There’s nothing inherently wrong with lying to the government, per se (the way there might be with lying to a person, regardless of whether your lie harmed them directly and tangibly); however, lying to the government may well have other consequences, which are themselves bad, making the lie immoral on those grounds.”
That is, I don’t think Eliezer was saying that if you lie to the government, that somehow automatically counterbalances any and all negative consequences of that act merely because the act qualifies, among other things, as a lie to the government.
Let’s see if we can’t apply this principle to the rest of your examples:
I would certainly never attach my name to any suggestion that I endorse lying to the IRS.
This seems fine to me.
Depends a whole lot on the circumstances. I can’t make a blanket comment here.
Such lies might very well harm people, and so are bad on those grounds.
This does seem bad for rule-consequentialist reasons.
Seems reasonable to me, actually. You might get moral demerits for the consequences of your lie (insofar as the untruth might harm actual humans), but lying to the government is not wrong in itself.
This example particularly amuses me, since this is the first year in a while where I won’t have to lie on my federal tax return about my marital status, and I’m really happy about that.
That’s not lying. To see this try tabooing “marital status”.
No doubt! I do wonder what JRMayne would say about cases like yours, though. To me it seems obvious that you did nothing wrong in those previous years.
(nods) I think even by the government’s standards, I didn’t actually do anything wrong. Come to that, I’m not sure I was even lying, technically speaking, as I’m not sure if filing single-head-of-household is technically asserting that I’m unmarried in the first place. It just felt like it.
Assuming it is asserting that you’re not married, it’s asserting that you’re not married by the Federal tax definition. You weren’t, so it’s not a lie.
Bernie Madoff is a stockbroker who ran a famous Ponzi scheme that came to light a few years ago, at the height of the financial crisis. Judging from the Wikipedia page, the fraud wasn’t a terribly complicated one: basically, he was taking investors’ money and hanging onto it rather than investing it, while fabricating (unusually consistent) paper investment returns for his clients and paying them out of pocket if they ever wanted to cash out.
Yes, I know who Bernie Madoff is, I’m just not clear on what are the implications of the quoted statement to the government. What does it mean? How was it false? Are there legal obligations to disclose something in such a case? What are they? What are the consequences (practical, not legal) of that lie? Who is harmed by the lie? Who is harmed, on the other hand, by the actual fact which you are lying about? Etc.
I just don’t have anywhere near enough context for any of this.
It means that Madoff was claiming he’d invested his clients’ money at an annual rate of return of… let’s see… a little under 20% (Wikipedia cites 10.5 to 15) when he’d actually had it in the bank at a RoR in the low single digits. Because of that, there would have been an increasingly large gap (probably around 10% annually, compounded over the life of his fund) between the figures he’d cited to his clients and the actual money he’d have available to return to them, and if and when enough of them decided to collect, they’d have found themselves short in proportion to that gap plus whatever Madoff took out for himself (a sum in the millions).
This is straightforward fraud: Madoff promised a service, deliberately failed to deliver, and pocketed compensation for it anyway. The harm done by Madoff extracting compensation is obvious (it’s basically theft); the harm done by him not doing his job is a little more complicated, but also substantial once you take into account opportunity cost. I don’t know the exact legal requirements.
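For a rough sense of scale, here’s a back-of-the-envelope sketch of that compounding gap. The specific rate, principal, and time horizon below are illustrative assumptions, not figures from the actual case:

```python
# Back-of-the-envelope sketch of the compounding gap (illustrative
# rates and amounts only -- not actual figures from the Madoff case).

claimed_rate = 0.12    # within the 10.5-15% annual range Wikipedia cites
actual_rate = 0.02     # assumed low-single-digit bank rate of return

principal = 1_000_000  # hypothetical client investment
years = 15             # hypothetical life of the investment

owed = principal * (1 + claimed_rate) ** years  # what clients believe they hold
have = principal * (1 + actual_rate) ** years   # what is actually available

print(f"Clients believe they hold: ${owed:,.0f}")
print(f"Actually in the bank:      ${have:,.0f}")
print(f"Shortfall on cash-out:     ${owed - have:,.0f}")
# With these numbers the fund claims ~4x what it actually holds after
# 15 years, which is why the scheme collapses once enough clients
# decide to collect.
```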
Ok, thanks. That makes sense.
If you don’t mind a bit of followup explanation: where does the lie to the government come into this? Like, clearly Madoff defrauded his clients and that’s terrible, but I’m still not clear on the role of the disclosure to government institutions (or lack thereof). Is it just that the government in this case is the channel by which one disclose information about operations to one’s clients, i.e. the government acting on behalf of the clients? Or is it something else...?
The SEC’s basically acting as an enforcement body and a standards organization in this case. Lying to them allowed Madoff to perpetuate his fraud, and perhaps more importantly to legitimize it; he wouldn’t likely have been able to manage billions of dollars if he’d been operating outside the regulatory framework. I’m not sure I’d call that intrinsically immoral, even with my deontology emulator on, but in this context I think I’d be comfortable saying that it acted to exacerbate the situation.
It looks like he’d tried to stay out of their sights as much as possible, though. Judging from Wikipedia, most of the investigation here was carried out by his competitors.
Understood. Yes, given this explanation I think I agree that lying to the SEC was immoral in this case.
This seems like a good heuristic to cover my “nosy relatives” example, as well as many others, and fits my moral intuitions. Good work, Mark Horstman (or whoever)!
Do you think it fits the girlfriend case in the OP? I mean, do you think you are wronging your partner if, when they press you for an assessment of their performance, you lie to spare their feelings? (I agree with you that they’d be wrong to then get upset if you respond honestly and negatively, but that’s a different question.) Are you wronging your partner even if you know you are fulfilling their preferences by lying? (Or does that disentitle them to an answer?)
I do not. I think you are entitled to the truth about your partner’s opinion of things that are important to you. Your partner’s, note; perhaps also your close friends’; not anyone’s.
I would feel wronged, if I were said partner. I think that if you’re in a relationship with a person who values truth, then yes, you are wronging them by withholding it to spare their feelings. If your partner is someone who does not value truth, then, I think, you are not wronging them by lying to spare their feelings. I’m not sure about this. To me, it is a moot point since, as I’ve noted, I would never want to be with such a person.
The question of whether they are entitled to the truth is not actually relevant, as they are not asking for the truth in such a situation; they are asking for something else (validation? support? I don’t know).
Yes. There are also questions which interviewers are legally prohibited from asking during job interviews, which probably have good moral reasons behind them, not just legal ones.
In my recent comments, I’ve been developing the concept of a “right to information,” or “undeserving questions.”
That seems right to me, though we should probably say something about what you’re then allowed to say. You can’t lie to your nosy (monamorous) boss and say “Great! I have sex with your partner all the time.” Yet if you are sleeping with your boss’ partner, maybe it’s not quite right to lie. Is she entitled to an answer in that case?
I think your case is different from the OP’s: sleeping with your boss’ partner means that you are already doing something much more immoral, and lying is not your biggest moral issue.
I don’t normally like to blather on about myself, but I feel that a bit of self-exposition might help some people with their apparent … Fundamental Attribution Error, perhaps?
I have an extremely malleable identity in certain types of social situations, to the point that I literally come to believe whatever I need to believe in order to facilitate rapport with whomever I’m talking with.
For example, I normally have a pretty strong aversion to infidelity in relationships, but on a few occasions I’ve deeply connected through prolonged conversation with friends who were engaged in relationship infidelity. It is sort of a running joke among my closest friends that I can get almost anybody to open up to me and share their deepest, darkest secrets. The way I do it is that I am genuinely nonjudgemental, and the method by which I am genuinely nonjudgemental is that I have a “core” module that holds my actual beliefs, and a surface “chameleon” module that does the actual talking, which just says whatever it needs to say to establish the connection.
All of this babbling is to convey that if you were to interrupt me in the middle of doing this and say, “moridinamael, was that a lie?” I would answer “No.” Because although I might be saying something that isn’t in line with what “I” (whatever that is) really “believe” (whatever that means), it doesn’t in that moment feel like a lie; it actually feels really good and pure and warm, because I’m connecting with somebody over their pain.
Now, there are some people in this discussion thread who, I feel, would think I am some kind of monster. And I think my brain probably works very, very differently than theirs, or at least the social circuitry is wired differently. But just bear in mind that people like me exist and we can’t really help the way we are … or, I should say, if I could help it, suppressing it would basically cripple me.
Well, I’m not going to call you a monster or anything, but I will say that I sure would hate to find out one of my friends was the way you describe yourself. I don’t think I could continue to be friends with that person, and I sure wouldn’t choose to be close to a person if I knew in advance they were like this.
Basically, it seems like you’re saying: I am really good at self-deception, and so when I lie to you, it’s not really a lie because I’m also lying to myself! And believing that lie!
Which doesn’t change the fact that what you’re saying, in such a circumstance, isn’t the truth. Your attitude seems to boil down to: “Truth? Haha! What is truth anyway, eh? If I believe any old lie I can come up with, then it becomes my truth, doesn’t it? That’s just as good as ‘the truth’! Whatever that is!”
Furthermore and separately:
Once you decide to not care about whether your beliefs are true, almost any conversation I could have with you about any of your beliefs, or that is based on any of your beliefs, becomes pointless. Because I know that what you believe has no correlation with truth, and that you just don’t care about whether it does. If you’ll say anything to establish a rapport with me — even if you make yourself believe that thing while you’re saying it — then that rapport is worthless to me; because (however much you may protest the terminology) that rapport is based on a lie.
(However, all of that said, I do think your post is valuable, as it contributes a useful data point, as was your stated intention.)
I agree with everything you said on a personal level, but I think you’re committing the fallacy of false generalization.
You (and I) both place a very high value on truth over comfort. We feel incredibly uncomfortable—perhaps even painfully so—when we suspect that any of our beliefs might be false. Therefore, for us, finding out that a friend was lying to us (as well as to himself) is tantamount to experiencing a direct attack.
However, not everybody in the world is like us. Other people place a very high value on comfort and positive reinforcement. When they talk to their friends, they do so not in order to Bayes-adjust their beliefs, but in order to reinforce their feeling that they are valued, needed, and cared about.
Note that this does not necessarily mean that such people do not care about truth. They often do; but truth-seeking is not the reason why they engage in conversations.
So, for people who value comfort in their relationships, having a friend like moridinamael would be ideal. And I can’t state with any amount of certainty that their worldview is inferior to mine.
Well, sure. That’s why I phrased my comment the way I did, referencing what I like/prefer/feel. I agree with your assessment of how we (you and I, and others here on LessWrong) compare to most other people.
However, I don’t entirely agree with this:
I, too, like feeling that I am valued, needed, and cared about; and I don’t necessarily engage in conversations only for truth-seeking. I sometimes have conversations for the purposes of entertainment, or validation, or comfort. It’s not like truth-seeking is my only reason for talking to another human being, ever.
But!
But. One thing I never want is to be entertained by lies[1]; to be validated with lies; to be comforted by lies. As I said in another thread, truth may be brutal, but its telling need not be. There are many ways to comfort and to validate without lying.
If I come to a friend for comfort, and they comfort me by lying, I would feel somewhat betrayed. How betrayed, to what extent — that would depend on the subject matter and magnitude of the lie, I suppose.
[1] Obvious exceptions include storytelling, hyperbole, sarcasm, performance, and all the other scenarios wherein a person says something that they don’t believe is the truth, but they correctly expect that their audience is not expecting that statement to be true, and is not going to believe it as the truth.
Yes, good point.
I agree, and I feel the same way. However, I believe that you and I see conversations somewhat differently from other people.
When you and I engage in conversation (unless I misunderstood your position, in which case I apologize), we tend to take most of the things that are said at face value. So, for example, if you were to ask “did you like my play?”, what you are really asking is… “did you like my play?” And, naturally, you would feel betrayed if the answer is less than honest.
However, I’ve met many people who, when asking “did you like my play?”, really mean something like, “given my performance tonight, do you still consider me a valuable friend whose company you’d enjoy?” If you answer “no”, the emotional impact can be quite devastating.
The surprising thing, though (well, it was surprising to me when I figured it out) is that such people still do care very much about the truth; i.e., whether you liked the play or not. However, unlike us, they do not believe that any reliable evidence for or against the proposition can be gathered from verbal conversation. Instead, they look for non-verbal cues, as well as other behaviors (f.ex., whether you’d recommend the play to others, or attend future plays, etc.).
So, as I said above, the two types of people view the very purpose of everyday conversation very differently; and hence tend to evaluate its content quite differently, as well.
You make good points, and your assessment seems entirely correct.
This seems accurate, yes. Strangely, I remember reading/learning/realizing this before, but I seem to have forgotten it. How curious. Perhaps it is because the mode of communication you describe is so unnatural to me. (As I am on the autism spectrum.)
I am unsure how to apply all of this to the moral status of behaving the way moridinamael describes...
I have not internalized this point, either, and thus I have to continually remind myself of it during every conversation. It can be exhausting, and sometimes I fail and slip up anyway. I don’t know where I am on the autism spectrum; perhaps I’m just an introvert...
Yeah, it’s a tough call. Personally, I think his behavior is either morally neutral, or possibly morally superior, assuming that people like ourselves are in the minority (which seems likely). That is to say, if you behaved in a way that felt natural to you; and moridinamael behaved in a way that felt natural to him; and both of you talked to 1000 random people; then moridinamael would hurt fewer people than you would (and, conversely, make more people feel better).
Of course, such ethics are situational. If those 1000 people were not random, but members of the hardcore rationalist community, then moridinamael would probably hurt more people than you would.
On the third hand, moridinamael indicates that he can’t help but behave the way he does, so that adds a whole new layer of complexity to the problem...
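A minimal sketch of the arithmetic behind this, in Python, with made-up numbers (the 80/20 splits and the one-hurt-per-mismatch assumption are for illustration only, not data):

```python
# Hypothetical expected-harm comparison between two conversational styles.
# Assumption: a fraction p of listeners prefer comforting answers
# ("comfort-seekers"); the rest prefer blunt honesty. Each mismatch
# between style and preference hurts one person.

def people_hurt(n, p_comfort_seeking, style):
    if style == "chameleon":  # comforts everyone; hurts the honesty-preferrers
        return round(n * (1 - p_comfort_seeking))
    return round(n * p_comfort_seeking)  # "straight": hurts the comfort-seekers

n = 1000
print(people_hurt(n, 0.8, "chameleon"))  # 200 hurt among random people (assuming 80% comfort-seekers)
print(people_hurt(n, 0.8, "straight"))   # 800 hurt among random people
print(people_hurt(n, 0.2, "chameleon"))  # 800 hurt among hardcore rationalists (assuming 20%)
print(people_hurt(n, 0.2, "straight"))   # 200 hurt among hardcore rationalists
```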
Your analysis of the ethics involved is valid if you only take harm / comfort into account, but one aspect of my own morality is that I value truth intrinsically, not just for its harm/help consequences. So I don’t think it’s as simple as counting up how many people are hurt by our utterances.
If you value truth intrinsically, then reducing your ability to approach it would hurt you, so I think my analysis is still applicable to some extent.
But you are probably right, since we are running into the issue of implicit goals. If I am a paperclip maximizer, then, from my point of view, any action that reduces the projected number of future paperclips in the world is immoral, and there’s probably nothing you can do to convince me otherwise. Similarly, if you value truth as a goal in and of itself, regardless of its instrumental value, then your morality may be completely incompatible with the morality of someone who (for example) only values truth as a means to an end (i.e., achieving his other goals).
I have to admit that I don’t know how to resolve this problem, or whether it has a resolution at all.
This observation fits my model of others. Most people are not perfectionists, over-achievers, or ravenous truth-seekers above all. Consequently, I believe that people aren’t those things unless they specifically give me reasons to believe they are. And I treat them accordingly, and interpret their requests for feedback in accordance with my impression of what they are looking for.
If someone wants more critical feedback from me, or more unvarnished opinions, then they can get it by (a) acting like the type of person who values those things and who can handle them, or (b) asking me explicitly.
Why? Beanbag chairs can be useful, so long as you remember not to build your entire house out of them.
I’m not entirely sure I follow your analogy. Is it: “People with personality traits you hate can be fine to have as friends, so long as not all of your friends have personality traits you hate”?
If so, then I disagree.
Not being friends with people you hate is nearly a tautology. I’m saying you shouldn’t hate and shun people just for prioritizing your comfort over their own integrity.
If your social circle consists entirely of straight-talkers, where will you go when you need to be comforted? If a putty-person wants to associate with you, but you have a well-established reputation for shunning putty-people and a relatively homogenous social circle… well, then, they’ll pretend to be a straight-talker, because blending in is what they do. Eventually the game-theory of this makes you paranoid, which means more need and less opportunity for emotional comfort, which means any remaining infiltrators get more of your social bandwidth because they’re better at providing that comfort.
Also, you seem to have missed the distinction between in-principle independently-verifiable fact and self-reported preference. If moridinamael told me, due to my apparent feelings on the issue rather than a legitimate misperception, that a particular gun had been loaded with only five bullets when in actuality it contained six, that would be a much more serious issue than inaccurately reporting how enjoyable some sort of entertainment media had been, even if the entertainment preference went on to influence purchasing decisions and the sixth bullet wasn’t aimed at anything I cared about.
Oh, and:
To the “straight-talkers”, of course. Can you find comfort only in lies?
If brutal honesty satisfied all human emotional needs the world would look very different than it does.
By “comfort” here I am referring particularly to the feeling of finding someone who agrees with you closely on some essentially subjective issue, such as taste in art or the moral worth of specific individuals. It is in principle possible to find someone who holds the ideally matched set of opinions persistently, for their own reasons, but there are search costs, and such a person might have other features inconvenient or prohibitive to long-term friendship. A less-close match provides a weaker degree of the feeling. Someone you know to be, on some level, insincere, also provides a weaker degree of the feeling, but that can be outweighed by them being effectively a closer match, and the reduced costs in other areas.
Is my reasoning flawed, or is this a matter of you experiencing the latter effect (suspension of disbelief) more strongly?
It’s easier (though still non-trivial) to find a set of someones, each of whom holds matching views on some subset of the relevant opinions, and who together cover most or all relevant opinions. It’s not easy to find people with whom you match thusly!
Finding good, true friends is not something that just happens trivially. But it’s worth it. I wouldn’t want to settle for less.
If I’m interpreting your phrasing correctly, then… um, yes. It’s a matter of that. I value truth, and honesty. If I know someone is lying to me, I’m not just going to “suspend disbelief” and pretend I don’t know they’re lying. Not to mention: how am I going to get around the fact that their lies and deceptions make it very difficult for me to respect them? More pretending? More self-deception?
No thank you.
Finally:
Who said honesty has to be brutal? The truth may be, but its telling may not. And I am not comforted by lies.
Er, what? What are you talking about? This doesn’t happen. Is that something you experience in your life? People infiltrating their way into friendships with you, when they know that their personality traits are something you hate? That must suck. :(
“You can’t prove I hate your pie, so I might as well lie and say I like it.”?
No thanks. If that’s how you (the hypothetical you, a person who wants to be my friend) behave, then, all else being equal, I don’t want to be your friend.
It is a thing which I have seen happen to people. There are known countermeasures, which I am attempting to discuss and you are discarding as repugnant.
Well, ok. Let’s posit that this is a thing that happens. What are the countermeasures?
If you want me to boil it down to three words, “business before pleasure.” Accumulate some people you can count on to cover their own specialties and communicate with you accurately and precisely, and some other people who are fun to be around. Optimize those groups separately. If someone wants to straddle the line, never let them apply leverage from one mode to the other. Never forget which mode you’re currently operating in. Business gets priority in emergencies and strategic decisions, because survival, but there should be a balance overall: it’s “before,” not “instead of.”
Wow. That sounds like a terrible life.
I thank you for the information/advice, but with respect, I am going to ignore it entirely. I will continue to have a small circle of close friends who are both fun to be around, and don’t lie to me. I will continue to avoid closeness with people who lie to me; should any infiltrate my circle of friends (for reasons that I still can’t imagine), I will cut them off utterly as soon as I discover their true nature.
Personally, I find people who lie aren’t fun to be around.
I suspect it happens to celebrities and very rich people all the time.
I don’t think being genuinely nonjudgemental is lying. If I’m having an intellectual argument, it’s also not lying to agree with the opposing side on some points for the sake of having a good argument.
If I disagree with someone about A, B, C and D it’s completely fine to assume for the sake of the discussion that A, B and C are true to convince them that D is right.
If specifically asked you might say that you don’t believe A, B or C but you don’t have to be open about everything that you disagree with by default. That just leads to confusion and no effective intellectual exchange.
Any good therapist learns not to tell the client everything the therapist thinks, but rather to tell the client what’s helpful for the client. A good therapist will still honestly answer direct questions about the therapist’s beliefs.
I put much more trust in people who have a strong core and aren’t judgmental, so that they can morph into whatever they need to be to connect on a deep level with another person.
All the people whom I would trust enough to jump from a bridge if they told me to jump from a bridge have that quality. My first reaction would be to ask: “Do you really think that’s a great idea?” but to the extent that I know they come from a warm and pure place and are in strong empathy with me, I would follow them.
I wouldn’t extend that kind of trust to someone at a LessWrong meetup who has the reputation of always telling the truth but who sometimes says things from a judgemental state and sometimes says things from a warm place.
Over the last year I developed a stronger personal identity and got more clear about what I value. On the other hand, in a game of Werewolf, people who could once read my emotions and sometimes tell whether I was lying can’t anymore. Knowing who I am allows me to be a lot more socially flexible, to do whatever I want in the game of Werewolf in a way that’s not readable by the people I’m playing with.
I think I used to experience something like this when I was a teenager. I’d reflexively assume whatever identity was needed for rapport, not necessarily always with skill, and this seemed like lying only afterwards, when I realized I had gone too far and would probably get caught. This was annoying because I didn’t really have control over my lying. At some point in my early 20s this spontaneously stopped happening. I wonder if this simply had something to do with my brain maturing and whatever represents the relevant parts of my identity solidifying.
Do you think your family has anything to do with your curious cognition? In my paternal family, lying seems more like a sport than anything morally reprehensible and successful deception is considered something to be proud of. I don’t agree with them but can’t say I hate them either.
I also discovered I was like this as a teenager—that I had an extremely malleable identity. I think it was related to being very empathetic—I just accepted whichever world view the person I was speaking with came with. In my case this might have been related to reading a lot growing up, so that it seemed that a large fraction of my total life experience was the different voices of the different authors that I had read. (Reading seems to require quickly assimilating the world view of whoever is speaking in the first person.)
I also didn’t make much distinction between something that could be true and something that was true. I don’t know why this was, or if it is related to the first thing. But if I thought about a fact, and it didn’t currently feel jarring with anything else readily in mind, it seemed just as true as anything else and I was likely to speak it. So a few times after a conversation, I would shake my head and wonder why I had just said something so absurdly untrue, as though I had believed it.
In my early twenties, I found I needed to create a fixed world view—in fact, I felt like I was going crazy. Maybe I was, because different world views were colliding and I couldn’t hold them separate when action was required (like choosing an actual job) rather than just idle conversation.
That’s why I gravitated towards physical materialism. I needed something fixed, a territory behind all of these crazy maps. I think that the map that I have now is pretty good, and well-integrated with the territory, but it took 3-5 years. I’m still flexible with understanding other world views. For example, I was in a workshop a few days ago where we needed to defend different views, and I received one that was marginally morally reprehensible. I was the only one in my group able to defend it. (It wasn’t such a useful skill there, I think most people just assumed I had that view, which is unfortunate, but I didn’t mind—if it was important to signal correctly at this workshop I would have lied and said I couldn’t relate.)
This is interesting, particularly in connection with your gravitation towards materialism—thanks for sharing.
FWIW my parents both possess aspects of what I think of as this skill of becoming whoever I need to be to fit whomever I’m talking to. I really do think of it as a bit of a superpower and I’ve intentionally developed it rather than letting it fade which it probably would have done naturally.
Perhaps you think of me as having curious cognition but my point in posting this was actually to express the converse—that I see pieces of myself in everybody, that I see everybody doing this to some degree all the time, I’m just one of the rare people with the introspective awareness to see what I’m doing and guide it.
Ever go out to lunch/coffee/whatever with your boss or some figurehead of power, and witness how everybody except the boss transforms into an unimpeachable paragon of bland monotonous virtue? Folks are always selectively showing only the parts of themselves that they think need to be seen in a given context, and this is a type of deception through guiding expectations.
I do this as well, but I don’t “lie” (from the perspective of my core values).
I empathetically accept the other person’s ethics and decisions. I allow that common connection to genuinely color my tone and physical expressions, which seems to build rapport just as well as actually verbalizing agreement. When I find myself about to verbalize agreement of something I don’t actually believe, I consciously pull back. The trick is being able to pull back without losing your empathetic connection.
Anecdotally, I find that I can verbalize disagreement, but as long as I maintain the tone and physical signals of agreement (or ‘acceptance’, perhaps, but I think ‘agreement’ is more true), the other person remains open.
I endorse the vast majority of the post. Lying in most of those circumstances seems like an entirely appropriate choice, particularly to people you do not respect enough to expect them to respond acceptably to truth. Telling people the truth when those people are going to screw you over is unethical (according to my intuitive morality, which seems to consider “being a dumbass” abhorrent).
People have the right to lie. People do not have the right to lie without consequences. I suggest people respond to being lied to in whatever way best meets their own goals and best facilitates their own wellbeing. Those adept at navigating a sea of social bullshit and deception may choose to never treat lies as defections or provide any negative consequences. Those less adept at that kind of thinking may be better served by being less tolerant of lies from those with a given degree of closeness to them.
I implore you to respect others’ right to treat lies, liars, and you in whatever way suits them.
I personally assume people lie all the time (or, more technically, I assume they bullshit all the time). However, speaking about other people you may encounter, I hope you realize that some people do not interpret lies the way you hope. Failing to realize that your lie will create some terribly awkward situations is your behaviour and your consequence (as well as a consequence to the non-savvy recipient). As the person who is (presumably) more socially aware of the two parties, and the person who has analysed the subject more, you are going to be better equipped to adapt. So either don’t lie to people when it’s going to create terribly awkward situations, or avoid talking to people when you expect your preferred behavioural pattern will not work with them (e.g. based on apparently clumsy body language).
As is the case with all notions about how people ought to interact with each other, if you attempt to enforce your own standards and don’t adapt to the person you are interacting with you can expect things to go poorly. This applies to lying averse people interacting with liars. It applies to liars interacting with the lie-averse. It applies to ‘Guess culture’ people forcing their behaviour or interpretations on non-guessers and the reverse.
The most notable failure pattern that I observe is that of a wilful, stubborn insistence that consequences are the responsibility of the other party because “my” way is the naturally right way for the universe to be. A psychological-disposition-based precommitment not to swerve in a game of chicken.
Does this really serve many of them better though? Combine implicit high trust in people with judgmentality and poor lie detection in an environment where everybody lies. From an outside perspective the most extreme version of this seems like a recipe for lashing out at random people and alienating them. People openly judgmental about lying actually seem like good targets for deception, because you can expect them to be worse at spotting it.
Can lying averse people reliably spot the other nonliars?
Thanks!
Some lies should have consequences. But I think “respect other people’s right to lie to you [about some topics]” is a really important principle. Maybe it would help to be more concrete:
Some men will react badly to being turned down for a date. Some women too, but probably more men, so I’ll make this gendered. And also because dealing with someone who won’t take “no” for an answer is a scarier experience when the asker is a man and the person saying “no” is a woman. So I sympathize with women who give made-up reasons for saying “no” to dates, to make saying “no” easier.
Is it always the wisest decision? Probably not. But sometimes, I suspect, it is. And I’d advise men to accept that women doing that is OK. Not only that, I wouldn’t want to be part of a community with lots of men who didn’t get things like that. That’s the kind of thing I have in mind when I say to respect other people’s right to lie to you.
I agree with this. Though I think some degree of acceptance of white lies is the majority position, and figuring out when someone deviates from that and to what degree is tricky. Such social defaults tend to be worth going along with unless you have a pretty damn good reason not to.
You’re asking too much of people, even on LessWrong. You’re demanding purely consequentialist utilitarian judgment be made in the face of MULTIPLE ingrained cognitive biases, plus MULTIPLE levels of cultural conditioning.
You’re not going to win this one.
You really think people’s objections to Chris’s post are due to the objectors being insufficiently consequentialist?
Please, do explain.
Saying “some kinds of lies are actually okay” is bragging: “I am good at navigating social bullshit, so the presence of white lies is a net benefit for me.”—Yeah, good for you. Might not work for me. I might hurt myself by trying to costly-signal something I am not yet able to pay the cost of.
Claiming to have a common skill is hardly bragging unless you expect your audience to lack said skill.
For me it’s more like “I don’t like the presence of most white lies, but I have far more important goals than this preference that require interacting with all kinds of people, so I’ll suck it up.”
People who have the luxury of demotivating themselves by calling normal social interaction bullshit probably have pretty asocial jobs.
Oh, so not only you have the skill, all your friends have it too. How amazing!
(Sorry for the aggressive tone, but this is approximately how it translates to me.)
I don’t want to point fingers at anyone, but the mere fact that the topic of “white lies” created a debate in this community suggests that some members consider the costs non-negligible.
Yep, I find work with computers much less frustrating than work with humans. (But I also find some kinds of humans much less frustrating than others. So I might enjoy working with the filtered subset, or having a possibility to filter out the most frustrating clients.)
The meaning of “normal social interaction” depends on the kind of people you interact with. What’s normal in one group may be weird in another. Sure, some things also correlate between the groups, and it’s a good idea to improve in those. And the cost of having the interaction is often worth paying. Still, the cost is there, and if it’s far from zero (for me, it is), it has to be included in the calculation.
“Sorry, but” doesn’t fix open hostility. I’d rather attribute this hostility to the circumstances than to you, however.
For some reason this discussion prompts the most horrible interpretations of people possible, when there clearly are other interpretations available. I’m out.
If you extract the hyperbole this is an entirely valid reasoning. An observed pattern of lies (or an outright declaration of such a pattern) does mean that people should trust everything you say somewhat less than they otherwise would. Reputation matters. Expecting people to trust your word as much when you lie to them as when you don’t would be foolish. This is a tradeoff that seems worthwhile but you must acknowledge that it is a tradeoff.
False. It is their problem and yours. People not believing you is obviously a negative consequence to you. Acknowledge it and choose to accept the negative consequence anyway because of the other benefits you get from lies. (Or, I suppose, you could use selective epistemic irrationality as a dominance move and as the typical way to defect on an ultimatum game. Whatever works.)
With the caveat that the ‘most of the time’ excludes all the time when it matters to them most. Assuming a vaguely rational liar the times when they should be least trusted are times when being believed would benefit them the most.
Really? Someone saying “I do the socially normal thing with white lies” is reason to distrust what they say about science?
Saying “I do the socially normal thing” is pretty good evidence that you don’t do the socially normal thing.
Structurally, this post and its comments are extremely similar to the PUA threads.
In a sense, yes. Normally you don’t announce you do the socially normal thing. But when you’re in a subculture where lots of people don’t do the socially normal thing...
Agreed. Very few of the positions on lying taken in this thread could be classified as “socially normal” outside of (or in a number of cases even inside of) LW-associated circles.
Yes.
(I question the claim that this is merely an expression of normality but assume it for the sake of the answer.)
Yes, it is a reason to trust what they say about science less. The “socially normal” thing to do with respect to mentioning science is to be much more inclined to bring up findings that support one’s own preferred objectives than to bring up other things. It also involves a tendency to frame the science in the most personally favourable light.
An above normal obsession with epistemic accuracy and truthfulness (which is somewhat typical of people more intellectually inclined and more interested in science) ought to (all else being equal) make one more comfortable trusting someone talking about science. I, for example, often can’t help making references to findings and arguing against positions that could be considered “my side”. That political naivety and epistemic honesty at the expense of agenda is some degree of evidence. Possibly evidence that I can’t be trusted as a political ally on the social-perceptions battlefield but that I can be more useful as a raw information source.
Again, assume “all else being equal” is included in every second sentence above.
To some extent, though probably not to a large extent.
An older version of my recent article about trust used to have the following paragraphs, which I then cut since the essay was already long enough:
My views on lying are similar to your friend’s. Thanks for having a charitable reaction.
After reading some of the attitudes in this thread, I find it disconcerting to think that a friend might suddenly view me as having an inscrutable or dangerous psychology if they found out that I believe in white lies in limited situations, like the vast majority of humans. It’s distressing that, upon finding this out, they might be so confused about my ethics or behavior patterns… even though presumably, since they were friends with me, they had a positive impression of my ethics and behavior before.
Maybe finding out that a friend is willing to lie causes you to change your picture of their ethics (rhetorical “you”). But why is it news that they lie sometimes? The vast majority of people do. Typical human is typical.
Maybe the worry is that if you don’t know the criteria by which your friends lie, then they might lie to you without you expecting it.
If so, then perhaps there are ways to improve your theory of mind regarding your friends, and then avoid being deceived. You could ask your friends about their beliefs about ethics, or try to discover common reasons or principles behind “white lies.” While people vary on their beliefs about lying, there is probably a lot of intersubjectivity. Just because someone isn’t aware of intersubjective beliefs about the acceptability of lying, it doesn’t mean that their neurotypical friends are capricious about lying. (Of course, if future evidence shows that everyone lies in completely unpredictable ways, then I would change my view.)
For example, if you know that your friend lies in response to compliment-fishing, then you can avoid fishing for compliments from them, or discount their answers if you do. If you know that your friend lies to people he believes are trying to exploit him, then you don’t need to be worried about him lying to you, unless (a) you plan on exploiting him, or (b) you worry that he might think that you are exploiting him even if you aren’t, and he lies rather than notify you.
If that’s the case, then the real worry should be that your friend might feel antagonized by you without you realizing it and without him being able to talk to you about it. As long as you have good reasons to believe that you won’t have conflict with your friend, or work it out if conflict occurs, then your friend lying for adversarial reasons is probably not likely.
Just because your friends don’t give you (rhetorical “you”) an exhaustive list of the situations where they might lie, or a formalized set of principles, it doesn’t mean that you are in the dark about when they might lie, unless your theory of their mind leaves you in the dark about their behavior in general.
As you correctly observe in your excellent trust post, unforeseen circumstances are always a possibility in relationships. I think your post leads to the conclusion that trusting a person is related to your theory of mind regarding them.
Never-lies vs believes-that-at-least-some-lies-are-justified is probably not a very useful way to reduce unforeseen conflict. Someone who says that they “never lie” could have a different definition of “lies” than you. They might be very good at telling the literal truth in deceptive way. They might change their ethical view of lying without telling you. They might lie accidentally. Or they might be lying that they “never lie,” or they may be speaking figuratively (and mean “I never lie about the important stuff”).
The most useful distinction between people is not whether they will lie, but when. Predicting when your friends might lie is not just a function of your friends’ behavior; it’s also a function of your theory of mind.
There’s a fundamental problem with lying that goes unaddressed—it tends to reroute your defaults to “lie” whenever “lie” = “personal benefit.”
As a human animal, if you lie smoothly and routinely in some situations, you are likely to be more prone to lying in others. I know people who will lie all the time for little reason, because it’s ingrained habit.
I agree that some lies are OK. Your girlfriend anecdote isn’t clearly one of them—there may be presentation issues on your side. (“It wasn’t the acting style I prefer,” vs. “It’s nice that you hired actors without talent or energy, because otherwise, where would they be?”) But if you press for truth and get it, that’s on you. (One of my Rules of Life: don’t ask questions you don’t want to know the answer to.)
But I think every lie you tell, you should know exactly what you are doing and what your goals are and consciously consider whether you’re doing this solely for self-preservation. If you can’t do this smoothly, then don’t lie. Getting practice at lying isn’t a good idea.
I note here that I think that a significant lie is a deliberate or seriously reckless untruth given with the mutual expectation that it would be reasonable to rely on it. Thus, the people who are untruthing on (say) Survivor to their castmates… it’s a game. Play the game. When Penn and Teller tell you how their trick works, they are lying to you only in a technical respect; it’s part of the show.
But actual lying is internally hazardous. You will try to internally reconcile your lies, either making up justifications or telling yourself it’s not really a lie—at least, that’s the way the odds point. There’s another advantage with honesty—while it doesn’t always make a good first impression, it makes you reliable in the long-term. I’m not against all lies, but I think the easy way out isn’t the long-term right one.
When you tell one lie, it leads to another …
That’s exactly what I’d say too. And then, I’d commence the lying :-)
‘Continue’, you mean :-)
Heh. Indeed.
The problem is that such a policy logically requires also making a pre-game commitment to not answering the question “Are you a spy?” and also to not answer a question logically equivalent, and then the player has to keep track of logical implications and equivalences throughout the game, which leads to much poorer gameplay.
Also, if one doesn’t make such assurances, then any “lying” during the game is simply gameplay, but with the assurance being made outside of the game, any in-game lying becomes out-of-game lying.
Wait, wait, has the game already started?
The start of the game may be undefined, and whether a lie is counted as inside the game depends a lot on the players.
You accidentally a verb.
Thanks. Fixed.
Off by one.
Okay. Now I think it’s fixed.
I can see how a reputation for lying would be a bad thing to have, but I can also see why a reputation for not being capable of lying would be a bad thing (mainly in social contexts). From one of my other comments:
This was hard for me. There’ve been other times where I’ve slipped up and forgotten. Usually not in the context of friends explicitly telling me to lie about something, but in the context of Person X telling me something which, to them, is obviously something that they want to conceal from Person Y because of conflicts it would cause. However, I don’t model this–I model Person X and Person Y both as friends who I trust with details about my life, and assume that’s commutative. I don’t even think about it on a conscious level–it’s not “I want to tell this person the truth about the thing this other person did because lying is complicated”–they just ask me a question and I answer it. I try to avoid having enemies because it makes things complicated, but that’s not something I could force my friends to do, and it’s not even something I would think was right to force them to do...I just don’t get around to noticing potential conflicts.
Among certain groups of my friends, I’ve definitely earned the reputation for being a bit socially inept because of things like this.
I think the big thing to remember is that the meaning of something isn’t the dictionary definitions of the words combined with the rules of syntax. If someone asks you what you thought of a play, wanting to know what you thought of them, and you know this, saying “the acting was bad” is intentionally misinterpreting their question. It is an example of lying with truth.
I would expect someone who presses me for an answer would actually want to know the answer, but maybe I just have bad social skills.
There is one thing I dislike about lying. It’s considered rude to tell the truth in certain situations, because it signals that you don’t care about that person, because people who care lie, because people who care don’t want to appear rude. If people didn’t try to signal, things would be better off, but if you lie, you’re not only signalling that you care, you’re increasing the need everyone else has to signal. You’re making things more confusing for other people. It’s basically a large-scale prisoner’s dilemma. It’s like talking in a noisy room, where the other person can hear you if you speak up, but that just makes it noisier for everyone else.
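To make the prisoner’s dilemma framing concrete, here is a minimal sketch in Python with hypothetical payoffs (the numbers are assumptions, chosen only to exhibit the dilemma’s structure):

```python
# Each person chooses to lie (signal extra caring) or be honest. Lying raises
# the "noise floor," so everyone's signals become less informative.
payoffs = {  # (my_move, their_move) -> my payoff (hypothetical units)
    ("honest", "honest"): 3,  # low noise: honesty is cheap and readable
    ("lie",    "honest"): 4,  # I look extra caring against a quiet background
    ("honest", "lie"):    1,  # my honesty reads as not caring
    ("lie",    "lie"):    2,  # everyone lies; signals are noise, all worse off
}

# Lying strictly dominates (4 > 3 and 2 > 1), yet mutual lying (2) is worse
# than mutual honesty (3): the standard prisoner's-dilemma structure.
for my_move in ("honest", "lie"):
    print(my_move, [payoffs[(my_move, t)] for t in ("honest", "lie")])
```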
The solution to the noisy room problem is to either pass notes, or lean over and speak at a low-to-normal volume as close as reasonably possible to the intended listener’s ear. Alternative communication channels and building up trust/intimacy can be generalized to some, though probably not all, other versions of the problem.
Pressing for an answer could also mean you’ve said approximately the right thing, but your tone and phrasing didn’t convey a sufficient degree of conviction, or that you’ve said something wrong-but-not-unconscionable and they’re giving you a chance to retry. (I do not like “guess culture” very much.)
This is something I also struggled with for a long time and I’m definitely sure it was because I had (or probably still have) poor social skills. The thing I started to notice was that people might seem to be asking a question, but that question is really just a proxy for another question. It’s like people were communicating at two different levels. Like the stereotypical asking a girl to get coffee at 2am; the guy isn’t literally asking the girl if she wants coffee, and everyone knows this, and to answer as though he’s literally asking for coffee is demonstrating poor social skills. If the girl says yes to the coffee suggestion, she’s actually “lying” because she doesn’t want coffee, but wants the implication of what the guy is asking for when he suggests coffee.
If a friend asks me what I thought about a poem she wrote, she might be asking me literally about the poem, or she might be asking some other underlying question like her worth as a person or something else, using the poem as a proxy for that question. Giving my honest opinion about the poem might be, to her, me giving my honest opinion about her underlying question.
Yep. People do communicate on multiple levels. Yes, different levels can say different things or even contradict each other. Yes, part of “social skills” is the ability to manage multiple-level communications. Yes, women are much better at that than guys. Yes, it’s complicated.
:-)
Yes, understanding the question being asked is important.
“What did you think of the play” does not necessarily mean “what was your entire critical view of the play?” It could mean “what encouragement can you offer me?”
Alternatively, it could be the other person who made a failure of social skills: they sounded like they were pressing for your entire opinion, when they actually intended to be asking for encouragement, and they did a bad job of communicating what they wanted.
Hard to say which, given that what it sounds like isn’t an inherent property of what they’re saying. I guess you just have to compare it to what’s normal.
I think this is a great post. I fully agree about accepting other people’s right to lie… in limited circumstances, of course (which is how I interpreted the post). I figured it was primarily talking about situations of self-defense or social harmony about subjective topics.
I think privacy is very important. Many cultures recognize that some subjects are private or personal, and have norms against asking about people’s personal business without the appropriate context (which might depend on friendship, a relationship, consent, etc...). Some “personal” subjects may include:
Sexual orientation
Health issues
Configuration of genitals
Reasons for sexually/romantically rejecting someone
Current physical state of pain
Sexual history (outside STI discussion between partners)
Sexual fantasies
Past traumatic experiences
Political views that would be controversial or difficult to explain in the current context
The ethics of lying when asked about personal subjects seems more complicated. In fact, the very word “lying” may poison the well, as if the default is that people should tell the truth. I do not accept such a default without privacy issues being addressed. I will suggest that people do not have a right to other people’s truthful responses about private information by default; whether they do depends on the relationship and context.
If someone asks you for information about yourself in one of these areas, and this request is inappropriate or unethical in the current context, then you are justified in keeping the truth away from them.
There are two main ways of withholding the truth: evasion, or lying. As several people in this thread have observed, there are often multiple methods of evading the question, such as exiting the situation, refusing to answer the question, omitting the answer in your response, or remaining silent.
If an evasive solution is feasible, then it’s probably morally preferable. But if evasion isn’t feasible, because you are trapped in the situation, because refusal to answer to the question would lead to greater punishment, or because evading the question would tip off the nosy asker to the truth (which they don’t have a right to know), then lying seems like the only option.
While I admire the creative methods proposed in this thread to evade questions, such tactics aren’t always cognitively available or feasible for everyone. Sometimes, when dealing with a hostile or capricious questioner, pausing to come up with a creative deflection, or refusing to answer, will indicate weaknesses for them to attack. And if dealing with an ignorant or bumbling (but non-malicious) questioner, refusing to answer a question might cause them more embarrassment than you want.
An example from my recent experience: I was at work, grabbing some Ibuprofen from the kitchen. A new employee walked into the kitchen and asked, “oh, is that Ibuprofen? You’re taking it for a headache, right?” I said, “yes.”
I lied. I was taking Ibuprofen for a chronic pain condition, which I did not want to reveal.
To me, information about health conditions is private, and I considered the truth to be none of his business. I’m sure there are ways I could have evaded the question, but I couldn’t think of any. I viewed his question as a social infraction, but not such a big infraction that I wanted to embarrass him by scolding him, or by explicitly refusing to answer the question (which would be another form of scolding). I didn’t sufficiently understand his motivation to want to scold him; maybe he was genuinely curious about what Ibuprofen is used for.
It’s possible that he would have liked me to reveal that his question was overly nosy, to improve his social skills in the future and avoid offending people. The problem is that I didn’t know him very well, and I couldn’t know he would desire this sort of feedback. In a work context, where social harmony is important, I wasn’t feeling like educating him on this subject. It’s too bad that he has no way of learning from his mistake, but it’s not my job to give him that lesson when it’s costly to me. In situations that don’t involve my body’s health conditions, I am vastly more enthusiastic about helping other people with epistemic rationality.
I endorse lying as a last resort in response to people being unethically, inappropriately, or prematurely inquisitive about private matters. Conversely, if I want to question someone else about a private matter, I keep in mind the relationship and context, I note that they may not be ready or willing to tell me the truth, and I discount their answers appropriately. That way, I am less likely to be deceived if they feel the need to lie to protect their privacy.
I want to have an epistemically accurate picture of people, but I don’t want to inappropriately intrude into their privacy, because I consider privacy valuable across the board. I recognize that other people have traumas and negative experiences which might lead them to rationally fear disclosure of facts about themselves or their state of mind, and that it can be ethical for them to hide that information, perhaps using lies if necessary.
If the topic isn’t entirely personal to them, and affects me in tangible ways, then I would expect them to be more truthful, and would be less likely to endorse lying to hide information. Lying in order to protect privacy should be a narrowly applied tool, but these situations do come up. Consequently, I agree with the original post that there are at least some situations where we should accept that other people can ethically lie.
Can anyone point me to a defense of corrupting intellectual discourse with lies (that doesn’t resolve into a two-tier model of elites or insiders for whom truth is required and masses/outsiders for whom it is not?) Obviously there is at least one really good reason why espousing such a viewpoint would be rare, but I assume that, by the law of large numbers, there’s probably an extant example somewhere.
Can we taboo “intellectual discourse”? As I think about your question I realize that I’m not sure I understand what that phrase is being used to refer to in this context.
For present purposes, I suppose it includes any domain including the defense of lying itself.
So, I think “a defense of corrupting intellectual discourse with lies” collapses into looking for a defense of lying more generally… would you agree? I’m not trying to put words in your mouth, just trying to make sure I’ve understood you.
I’m trying to take the idea of not lying in science journals and broaden it to include fields other than science, and public discussion in places other than journals. A specific example would be Christian apologist William Lane Craig (who I’ve been following long enough to become convinced that the falsehoods he tells are too systematic to all be a matter of self-deception.)
Do you believe that Sokal was immoral when he wrote his famous paper? There are people who suggest that Bem wrote his latest famous paper for the same reason.
If you think that the system is inherently flawed and corrupt and has no error correction built in, the strategy of placing lies into the system to make it blow up makes sense.
Daryl Bem? I think people suggesting Bem isn’t being serious (though sadly mistaken) haven’t talked to him. If Bem is trying to do something like Sokal, he has been doing an Andy Kaufman-level job of trolling for many years now.
I think I remember reading that sentiment on a blog, from someone who’s a student with him. Bem is certainly deeply serious about his belief that academia is full of hypocrites.
Even if Bem does believe in psi, he’s not so stupid as to believe that the data he gathered for that paper proves that psi really exists. But if he can use that data to show how deeply wrong academia happens to be, and shake up academia from his perspective, maybe academics will start to take data more seriously. To the extent that he believes taking data seriously leads to believing in psi, shaking up academia serves that agenda.
In a world full of pseudoskeptics, someone who’s serious about evidence gets annoyed at pseudoskeptics. To the extent that you don’t mentally distinguish pseudoskeptics from the real thing, it’s hard to understand people like Bem.
I’m enough like Bem in that regard to feel with him. I’m the kind of person who goes on the skeptics exchange to write a question asking whether there’s evidence that supports the core assumptions of evidence-based medicine, and then has the highest-upvoted answer, for a year, be an answer opposing evidence-based medicine.
Part of the trick was to take the most authoritative source as the definition of evidence-based medicine; that source actually puts up a strawman that nobody in their right mind would defend in depth.
I’m deeply troubled when I read people saying that the evidence for climate change is comparable to the evidence for evolution because I think the evidence for evolution is pretty certain and better with p<<0.0000001 and climate change isn’t in that reference class. I’m serious enough about evidence to find that claim a big lie that offends me, especially when made in highly authoritative venues.
Bem is deeply serious but that paper is him saying: “Even if I play by your strange and hypocritical rules of “evidence”, I still can provide “evidence” that psi exists. Take that.” I think that the data he measured is real but I don’t think that he thinks the data of that particular experiment proves that psi is real. He might or might not believe that psi is real, I don’t know.
It’s a different kind of lie to lie by following the rules to the letter than to lie by putting evolution and climate change in the same reference class, but both are lies. Neither is about telling the truth as it is.
So you are bringing up a whole lot of unrelated, or only loosely linked, ideas. I’ll be honest, such a long reply of (at best) loosely connected ideas pattern-matches to “axe to grind” for me, so I strongly considered not bothering with this post. As it is, let’s limit the scope to discussing Bem.
Anyway, what exactly do you believe Bem is doing with his paper? I assumed the claim in your first post was that Bem was publishing silly results to highlight the danger of deifying p-values (as Sokal published a silly paper to highlight the low standards of the journal he submitted to). I contend this is not true, and that Bem believes the following (based on interviews, the focus of Bem’s work, and a personal conversation with him):
psi is a real phenomenon
ganzfeld experiments (as interpreted through standard statistical significance tests) are strong evidence for psi
“Feeling the Future” and other similar experiments are evidence for precognition
I contend all of these beliefs are mistaken.
In response to further claims you’ve made regarding the academic response to Bem, I further contend:
the academic community is right to be skeptical of such work, and in fact it’s a sort of informal Bayesian filter.
the academic response raised valid statistical objections to Bem’s work
The biggest problem I see is that an effect has to have as ludicrously small a prior as Bem’s before proper scrutiny is applied. Lots of small effects that warrant closer methodological scrutiny slip through the cracks.
I don’t think that you can understand the position of people who fundamentally disagree with you by reading a single paragraph. Yes, you can easily find a position where they seem to have another opinion than you do, but that doesn’t mean that you understand what they actually believe.
Bem thinks that academic science is generally not taking the data of their experiments seriously and therefore coming to wrong conclusions in all sorts of domains.
Sokal thinks that the literature department can’t tell true from false. Bem thinks the same is true of the psychology department. He thinks it lacks the same ability.
Sokal is not highlighting some specific issue of how one technique that the literature department is using is wrong. His critique of the literature department is more fundamental. The same goes for Bem. Bem doesn’t just think that academic psychology is wrong on one issue, but that it’s flawed on a more fundamental level.
Any good Bayesian holds that belief. If you look at a LessWrong discussion of what people learned from becoming Bayesian, you will find:
There are a lot of people in academia who don’t hold that belief and who aren’t good Bayesians. Bem is completely on the right side on that point.
I didn’t claim to. What I claim in what you quoted is that dragging in a concept like evidenced based medicine and climate science isn’t going to help anything in a discussion of Bem’s paper.
I would phrase this differently. Bem believes that an informal Bayesian filter (extraordinary claims require extraordinary evidence) is causing academic psychology to unfairly conclude that psi phenomena aren’t real. He wants us to ignore the incredibly low prior for psi, and use weak but statistically significant effects to push us to “psi is probable.”
I don’t agree with this, as I’ve hopefully made clear.
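To make the “informal Bayesian filter” concrete, here’s a minimal sketch of the update (in Python), with both numbers invented purely for illustration: even a result twenty times likelier under psi than under the null barely moves a sufficiently small prior.

    # Minimal Bayes update with invented numbers: a tiny prior for psi
    # and a moderately surprising experimental result.
    prior = 1e-20        # assumed prior probability that psi is real
    bayes_factor = 20.0  # assumed P(data | psi) / P(data | no psi)

    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    posterior = posterior_odds / (1 + posterior_odds)
    print(posterior)     # ~2e-19: still astronomically improbable

On this picture the dispute isn’t over whether Bem’s data are evidence, but over whether they are anywhere near strong enough.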
Not necessarily true: a good Bayesian who has read the paper could conclude the methodology is flawed enough that it’s not much evidence of anything (which was also largely the academic psychology response). I believe the methodology of “Feeling the Future” was so flawed that it isn’t evidence for anything. The replication attempts that failed further reinforce this belief.
Bem does not believe that most researchers really follow “extraordinary claims require extraordinary evidence.” He believes that many of the relevant researchers won’t be convinced regardless of what evidence is provided.
He might be wrong about that belief, but saying that he believes most researchers would be convinced by reasonable data misunderstands Bem.
Not much evidence and no evidence are two different things. If he believes it’s evidence and you don’t, he’s right: it might not be much evidence, but it’s evidence in the Bayesian sense.
If you debate with him in person and pretend it’s no evidence, he will continue to say it’s evidence, and he’ll be right. That will prevent the discussion from getting to the question that actually matters: how strong the evidence happens to be.
At university we made a failed attempt to replicate PCR. It really made the postdoc who was running the experiment ashamed that she couldn’t get it right and that it failed for some reason unknown to her. In no way does this show that PCR doesn’t work.
As far as replication goes, Bem also seems to think that there were successful replication attempts:
If you have a very strange effect that you don’t understand and can’t pin down, having 2 of 6 replication attempts succeed does not really prove that there’s no effect. If a method like PCR, which is done millions of times, can fail to replicate without knowledgeable people knowing why, then failing to replicate a very new effect doesn’t mean much. Trying to pin down the difference between the 2 successful and the 4 failed replication attempts might be in order; at least that’s where I would focus my attention if I weren’t attached to the outcome. It may very well turn out that there’s no real effect in the end, but there seems to be more than nothing.
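For what it’s worth, you can put a rough number on “2 of 6.” Assuming (my numbers, purely for illustration) that the replication attempts are independent and each has a 5% chance of a false positive when there is no effect:

    # How surprising are 2+ "successes" out of 6 replications
    # if there is no effect and each test has a 5% false-positive rate?
    from math import comb

    alpha, n = 0.05, 6
    p_at_least_2 = sum(comb(n, k) * alpha**k * (1 - alpha)**(n - k)
                       for k in range(2, n + 1))
    print(p_at_least_2)  # ~0.033: unlikely under the null, hardly decisive

Of course this ignores how the replication attempts were selected and reported, which is exactly the kind of objection raised against Bem’s own results.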
From the same interview of Bem I linked to above (but by the moderator):
Again, that’s not much different from the way Sokal sees the literature department.
Never mind.
Here’s something. It’s not a defense of lying, but I do think it’s an example, in an essay by Gould, of advocating lying that does not resolve into elites versus outsiders: 1 2 3 4. It ends with
which I read as advocating that the reader indoctrinate himself with the belief. I don’t think it’s clear whether he thinks it true or false, just too consequential to leave to the facts. This isn’t an exhortation to indoctrinate the masses with lies, but for the reader to first indoctrinate himself.
I think that this is a common pattern.
It’s possible that I’m reading this wrong. Perhaps it is a coded message of esoteric knowledge, and elites are supposed to know better than to indoctrinate themselves. Indeed, that could apply to any example along these lines.
Or perhaps I’m reading too much into those words and they aren’t meant to be indoctrination at all. Some nearby passages that argue against that:
For anyone else, the object level of the essay came up here (though perhaps for the meta level of another debate). I do think it is a good essay.
I agree that the immediate consequences of lying are sometimes better than telling the truth. However, one big problem is lying and then having to tell the truth later, or lying and then getting caught. The more complex the lie, the bigger the risk. The social conventions surrounding lying (feel free to lie, accept other people’s right to lie, the guess culture of not making your desires and feelings explicit) are a good solution for interacting with strangers, since under those conventions no one is making an effort to detect your lies. This is useful when you don’t know how sensitive someone is, so you need a strategy for dealing with them without treading on their toes.
I admit I don’t have much of a justification for this, but the idea of such a social norm within a romantic relationship makes me go Ugh. I’m okay with someone telling me “I don’t want to talk about it”; in fact, I wish that most people were receptive to that. But the idea of someone I trust lying whenever it’s more convenient than telling the truth does not sit well with me. Perhaps I’m just being unreasonable.
I suggest you explore the concept of trust on a less binary basis. Trust makes no sense to me unless it has some kind of a rough probability estimate attached to it. Different truths have different probabilities and different moral weights.
True, but it is also true that you can’t trust somebody on certain matters if they are willing to tell you white lies. It’s better to try and hang around more honest types so you can learn to cope with the truth better.
I actually prefer the honest types, but don’t judge normal people either. This preference is of minor importance. In most situations I can’t choose who to interact with and being stubborn about it won’t help.
Thanks very much for writing and posting this.
You’re welcome!
Here’s an excerpt from an attorney disciplinary code:
And from the commentary on that rule:
My take on this is that it’s pretty much understood and accepted that in negotiations, people bullshit about their intentions all the time. (Whether it’s a good idea or not is another question of course.) I was a bit surprised when I first read this rule.
To point out the obvious, speaking from personal experience, this is indeed a terrible idea.
A couple of months ago I told a lie to someone I cared about. This wasn’t a justified lie; it was a pretty lousy lie (both in its justifiability and the skill with which I executed it), and I was immediately exposed by facial cues. I felt pretty awful, because a lot of my self-concept up to that point had been based around being a very honest person. From that point on, I decided to treat my “you shouldn’t tell her ___” intuitions as direct orders from my conscience to reveal exactly that thing, and to pay close attention to whether the meaning of what I’ve said deviates from the truth in a direction favorable to me. As a consequence, I now feel rising anxiety whenever I have some embarrassing thought, followed by the need to confess it. I also resolved to search my conscience for any bad deeds I may have forgotten, which actually led to compulsive, fantastical searching for terrible things I might have done and repressed, no matter how absurd (I’ve gotten mostly-successful help with this part). She’s long since forgiven me for the original lie and what I lied about, but she continues to find this compulsive confessional behavior extremely annoying, and I doubt I could really function if I experienced it around people in general rather than her specifically.
If someone close to me started being that honest or, more importantly, that submissive with me, the power imbalance would probably upset me much more than any truths exposed. I don’t want to control my friends; I want them to challenge me and support me. Alternatively, a sudden change like that without obvious submissiveness might make me rather suspicious of what they’re hiding behind those little lies.
This is not to say there aren’t radically honest people who aren’t even a bit submissive. I haven’t met such people; they might be rather interesting, but I wouldn’t introduce them to anyone else I know. One person I know pretends to be radically honest by telling even strangers all kinds of personal stuff nobody in their right mind would expose, but is actually full of shit too.
Another thing I should note is that it can simply be a matter of human preferences. I’m very uncomfortable with the idea of having any truly close relationship (lover or close friend) with somebody who would be willing to lie to me. I see no reason why other wants should somehow override this one.
I don’t quite understand what you are imploring.
Of course other people have the right to lie to me. And I have a right to change my attitude and my expectations on that basis.
Rephrased in a slightly different way, other people have the right to lie to me but not the right to escape the consequences.
This may clarify what I meant there.
So a woman will lie to a guy to make rejecting him easier. She has the right to do this, sure. And the guy will be fully justified in coming to the conclusion that (a) the woman doesn’t trust him; and (b) she is willing to lie for minor convenience.
Is the trade-off worth it? I have no idea. Presumably it’s worth it to some and not to others.
Indeed. And will be fully justified in feeling insulted; after all, that lie communicates the sentiment “I think there’s a non-trivial possibility that you will turn hostile/abusive/violent if I reject your advances”. I’d sure feel insulted at having such a sentiment expressed toward me.
Of course, if in this situation the man and woman don’t know each other, or are very casual acquaintances, then it’s not really a big insult, because hey, random dude off the street could easily be that kind of asshole. But the closer your acquaintance is, the more insulting it is to use the lie-to-smooth-rejection.
Other possibilities include some or all of: “I think you’ll be hurt by my real reasons for rejecting you. I see no benefit in making those reasons clear, and it makes me uncomfortable to cause other people distress (I don’t think you’ll get angry! probably just sad). You might prefer the painful truth, but given that we’re merely acquaintances, that preference of yours doesn’t outweigh my wish to avoid an awkward scene. In fact, I doubt you value the truth about the matter so highly that I’d be fulfilling your real (if not espoused) preferences by delivering a harsh truth. Finally, I think that while you’re not partner material, you’re fun enough to hang out with on occasion. Telling you that I think you’re a 5.5/10 kind of person would make future encounters awkward too, so on balance it seems better to lie and preserve a mildly pleasurable, casual friendship.”
Still insulting, I guess, but not for the same reasons. I think the ‘hostile/abusive/violent’ thing is a lot rarer than the above.
Yes, your description seems plausible. I was responding specifically to the reason Chris was describing, but you are correct that your described reason also happens.
Yeah… most of that isn’t insulting, but “In fact, I doubt you value the truth about the matter so highly that I’d be fulfilling your real (if not espoused) preferences by delivering a harsh truth.” is, somewhat.
Well, maybe. Depending on which feminist websites you read, you might get different estimates. I have certainly witnessed, and heard about from friends and acquaintances, both sorts of situations.
Most people we meet will not have as high an opinion of us as we might hope. Politeness dictates they should not spell out all the ways in which this is true. When you manage to indirectly infer they don’t have such a high opinion of you in spite of their politeness, you probably shouldn’t get too insulted.
Again, this depends on how closely you know / how friendly you are with the person in question.
Someone I met once or several times, with whom I am on speaking terms but not really at all close, thinks there’s a nontrivial chance I might be a potential violent asshole? I don’t get too insulted (although I would draw conclusions from this about that person’s world view, and might condition considerations of increased friendliness with that person on the basis of those conclusions).
Someone I am more socially close to thinks this of me? I’m more insulted. I mean, this isn’t the sort of “not as high of an opinion as one might hope” where they politely refrain from saying that I put way too much salt in my casserole. This is quite a bit more serious.
There are a lot of possibilities here which are potentially less insulting. To add some:
If she reveals the true reasons for rejecting him, then he might express judgment about them, or some other reaction which is negative but not actually hostile/abusive/violent.
If she reveals the true reasons for rejecting him, he might briefly express an emotional reaction that he regrets later.
If she reveals the true reasons for rejecting him, then he might not understand and ask for an explanation, which could result in discomfort, or in her needing to reveal information that she doesn’t want to reveal.
If she reveals the true reasons for rejecting him, then he might try to change the situation to have her change her mind. If she doesn’t want him to try to change her mind, then it might be better to not let him think that he might be able to.
If she reveals the true reasons for rejecting him, he might take it too hard and develop unwarranted insecurities in the future.
(I am using she and he to be consistent with ChrisHallquist’s example, though I believe that these concerns apply to rejection in other gender combinations).
In my experience, and from hearing the experiences of friends and partners, there are plenty of good reasons to anticipate a non-graceful response to someone’s “true rejection” in a sexual or romantic context. Most of these reactions will be more in the embarrassing/awkward category than hostile/abusive/violent. Even if there is a low probability that a given person will react ungracefully, the negative utility of that reaction might be sufficiently high that the expected value of revealing the truth is low.
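To put toy numbers on that last point (mine, purely illustrative): if an ungraceful reaction happens 5% of the time and costs 100 units of utility, while honest feedback is worth 2 units the other 95% of the time, the expected value of candor is 0.05 × (−100) + 0.95 × 2 ≈ −3.1. A rare but sufficiently bad outcome can dominate the calculation.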
For these reasons, I would not automatically be offended if someone won’t tell me the truth about why they are rejecting me, and I won’t take it as perceiving me to be untrustworthy, hostile, abusive, or violent. Of course, I would prefer to hear the truth; I just don’t expect it, and I accept that I may never find it out.
This matches my experience from the female side of the situation—except that I’ll add that a lying rejection doesn’t necessarily represent explicit thought. Subjectively, it can be like a feeling or a reflex.
This doesn’t mean I think it’s genetically innate, but I do think it can be learned so early and subtly that it seems like the obvious thing to do.
I think you’re reading too much into their thought process. You of course have every right to be insulted, but if you can’t hide it, you might make their fear a self-fulfilling prophecy of sorts.
It might help to alleviate the insult to remember that these kinds of judgements can be more the result of flawed heuristics that evolution spawned than of deliberate reasoning. People can be quite clueless about what they’re afraid of, especially in complex situations, which all social situations happen to be.
Wait, what? Are you equating being offended at someone implying that you’re a violent asshole, with actually being a violent asshole?
Doesn’t that sort of equation make most insults into self-fulfilling prophecies? “I called her a bitch, and she got angry! Thus proving my point!” “I told him he was a moron, and he got insulted! What a moron for not realizing that he’s a moron!”
Yes, that’s true. The less self-aware, rational, and generally intelligent a person is, the less insulted I am when they think poorly of me.
Of course not. I’m not justifying the behavior, I’m explaining why getting insulted might be suboptimal as it might breed more of said behavior.
I don’t think I follow. Specify, please, to which behavior you refer.
Someone withholds information from you because they’re afraid you might get hostile. You think they should think more highly of you, and you get offended. Your facial expressions and your tone of voice signal to the person that you’re angry. Which way do you think this will affect their mostly subconscious estimate of how hostile you would have been had they volunteered the information?
I’m not trying to justify any behavior; I was referring to withholding information.
I understand what you’re saying now, yes. (I think “self-fulfilling prophecy” is a misleading term to use to describe this, though.)
As to your question: I think this falls into the category of “person is not very self-aware or rational”.
I agree the expression doesn’t fit like a glove, just thought it was close enough. What do you think is misleading about it?
Perhaps, and the way I see it these qualities inconvenience even the best of us much of the time.
Can you give some non-extreme examples of what those attitudes, expectations, and consequences would be? There will also be consequences for you if you treat all liars equally harshly, and people would benefit from taking this seriously unless honesty is some kind of first-priority terminal value for them.
One of the points here is that, as usual, it depends. Let’s say someone I know lied to me and I found out that it was a lie. My response would depend on three major factors:
The kind of relationship with that person. Relationships have (mostly implicit) rules and promises. A lie may or may not break such a promise. A co-worker lying to you about where he was last weekend is different from your partner lying to you about where he was last weekend.
The motivation behind the lie. A lie to avoid embarrassment is different from a lie to gain some advantage over you.
The nature of the lie—its magnitude and character. A lie to make oneself look better is different from a lie which results in you being fired from your job.
I don’t want to treat liars equally harshly or equally leniently. I want to treat them depending on the circumstances. There is no “general case”.
A non-extreme example of attitudes, expectations, and consequences? Sure. Let’s say Alice is a drama queen and wants lots of attention. She tends to lie (in minor ways) about what actually happened and also (in more pronounced ways) about her feelings and reactions. If I learned this about Alice, I would adjust my opinion about what kind of a person she is, I would expect her accounts of herself to be exaggerated, and I would treat her troubles and problems less seriously.
That’s a nice summary of the kind of flexibility I would endorse, thanks.
A note w.r.t. the quote:
-- The Author, Harry Potter and the Methods of Rationality
I know. I’m pretty sure Eliezer intended that arc to partly be about how horrible lying is; see especially the follow-up chapter being titled “Contagious Lies,” which is a reference to an anti-lying post in the sequences.
Interesting. I hadn’t thought of that—personally, I have to admit that I think the model of Rational!Quirrell has left me significantly more favorably disposed towards lying than I would have been otherwise.
As long as enemies exist, secrets must be kept.
And never forget, human minds are our own worst enemies. We run on broken substrates that are hurt more sharply than they should be by comments like “You look gross and I don’t want to talk to you”. We have enemies even within the minds of our closest friends. It’s best not to awaken them.
I think the reason you’re being downvoted is that people would prefer you to just edit this addendum into your original comment rather than replying to yourself. It’s all I can think of since your point is in itself quite insightful.
Edit: Okay, would anyone care to explain what’s actually going on, then?
I find that, sometimes, perfectly honest words are interpreted as white lies because they sound like such.
“What are you doing this weekend?” Me (very early in the term): “Studying for midterms.”
“Let’s be just friends from now on, okay?”
“You’re a wonderful person, and I wish you the best of luck.”
On another topic, I find myself lying not to protect others’ feelings but out of cowardice, to hide misdeeds, especially those that I irrationally didn’t expect anyone to notice. The worst instances have involved frequent and compulsive food theft, and occasionally sneaking “improvements” into the minutes of a meeting or the report of an interview. Worst of all, for a rationalist, I tend to lie to myself, specifically by hiding my head in the sand and refusing to check on something that I expect will yield inconvenient truths, such as the state of my bank account, or whether I’m late in returning my books to the library. I feel that both kinds of lying are part of the same phenomenon of cowardice that I have yet to understand and resolve… Could it be as simple as “suck it up”?
I guess one problem that crops up when dealing with the issue of lying is that there is no clear litmus test. It may be possible to give broad guidelines such as “it is OK to lie in situations A, B, and C, but most definitely not OK to lie in situations D, E, and F,” but real life is far more complex and subject to all manner of interpretation (not to mention all manner of bias as well). I strongly suspect that before we can rule on when it is OK to lie, or when it is OK to use a half-truth, we need to perfect the art of communication, i.e. develop a system where we can keep perfect score of what words truly mean, how much deviation there is from the intent, and how much effect the said deviation will have.
Your discussion of Harris’s ‘Lying’ is a little terse, and does miss some of his arguments. I think anyone interested should get his book; it’s very short and can be read in half an hour to an hour, depending on your speed. PM me for a PDF copy of the first edition (note: the second edition is much updated).
Here are two extended quotes that I think contain ideas not addressed in the post:
...
That’s an interesting example, because in her more reflective moments, the friend is almost certainly already aware that she is fat and that it makes her significantly less sexually attractive. She is probably reminded of this unpleasant truth on a regular basis and it’s not entirely clear that an additional reminder will be helpful. She has probably tried at least 10 or 15 diets and they have all failed.
If you are considering reminding a fat person that they are fat, you need to ask yourself what your motivations are for doing something which (1) will certainly cause short-term emotional pain; and (2) is unlikely to result in the person getting their shit together and losing the weight. Are you really trying to help them? Or are you just trying to make yourself feel superior at their expense?
My impression is that there are a lot of “concerned” people who are happy to give free advice to fatties (often something along the lines of “eat less and exercise more—you’re killing yourself”) but unwilling to give $20 or $30 towards a gym membership for said fatties. This suggests that often the motivation is more status-mongering than actual concern.
Whatever the value in being honest with other people, I suspect there is more value in being honest with yourself.
Remember that we’re discussing a case where the person asked you for your opinion. I certainly wouldn’t just randomly say to someone “Hey, guess what? You’re fat”, especially if that person was my friend or someone else I cared about.
But if they asked me? That’s a different story altogether.
Do you really think this is the case for good friends, or loved ones? Unwilling to give $20 or $30, really? And furthermore, do you in fact believe that not having the money for a gym membership is the important obstacle between an overweight person and an effective weight-loss solution?
The quoted hypothetical doesn’t make clear if the information is asked for or volunteered. Nor does it make clear what it would mean to tell the truth: “Yes, that dress makes you look fat.”; “That dress makes you look fat because you look fat in any dress because you’re fat”; “You look fat in any dress and that’s why men are not interested in you”; or something else.
Probably not… but I don’t think it affects my point, which is that a lot of the time, people express concern, and might even believe that they are acting out of concern, but actually have other motivations.
No I don’t. But I’m skeptical that the lack of an additional reminder is an obstacle either.
I wouldn’t count non-literal use of language (“it was okay” when it’s obvious to both interlocutors that the actual intended meaning is ‘[it sucked but I don’t want to hurt your feelings]’) as lying.
But still, I prefer to be with people to whom I can also say why it sucked (so they get a chance to do better the next time) without hurting their feelings either. I can’t choose my own parents and I can’t choose whether the Nazis will come to my door, but I can choose whom to interact with in most other situations (excluding NPC-like situations, where topics I’d want to withhold my opinions about aren’t likely to come up in the first place). Feeling like I’m walking on eggshells whenever talking to someone is not a pleasant sensation and kills most of the fun of talking to them in the first place. (YMMV.)
I reject this idea for a fairly simple reason. I want to be in control of my own life and my own decisions, but due to lack of social skills I’m vulnerable to manipulation. Without a zero-tolerance policy on liars, I would rapidly be manipulated into losing what little control of my own life remains.
You seem to be treating lack of social skills as a static attribute rather than a mutable trait. This may not be the most productive frame for the issue.
Improving my social skills is HARD. I could invest a massive effort into it if I tried, but I’m at university right now and my marks would take a nosedive. It’s not worth the price.
Never claimed it wasn’t. As a matter of cost-benefit analysis, though, I think you might nonetheless find it attractive in comparison to unilaterally declaring war on the liars of the world, which I’d expect to be strenuous, socially costly, and largely ineffective in preventing manipulation.
As a matter of fact, drawing a sufficiently hard line on lying opens up entirely new avenues for manipulation of your trust.
I did not read Carinthium’s statement to be a declaration of war against liars. At most it would be analogous to a trade embargo.
One can make choices about what one welcomes in one’s own personal life without attempting to change or fight everyone who doesn’t go along with them. The choice to not welcome lies limits Carinthium’s social options quite significantly, but it needn’t be as strenuous or overt as you suggest.
There’s a widely-used term for the political environment embargoes create: http://en.wikipedia.org/wiki/Trade_war
It’s a good term (i.e. I signal that the downvote you received wasn’t from me; rather, the compensating upvote was, so as to slightly facilitate future cooperation). I do observe that I was uncomfortable with saying ‘trade embargo’ while I was saying it. It felt off because ‘embargo’ has too much of a connotation of “trying to punish or damage an enemy for some reason,” where I wanted to emphasise more “choosing systematically to avoid trading with the person because you deem them a bad trading partner and expect to lose out on deals.”
This does depend a little on implementation detail. I don’t know Carinthium and don’t know to what extent he really does try to enforce his will upon the world in general, rather than choose which parts of it to hang out in and clumsily keep the rest away. I chose to interpret it the most charitable way (i.e. assuming that it is an awkward pattern that could kinda work, rather than a rather glaringly self-destructive and futile one).
When something is really hard to do, but everyone else seems to be doing it anyway, consider what that implies about the value of the result.
Also, it doesn’t necessarily have to be a matter of massive effort and formal analysis. There is the option of learning by exposure. Spend some free time (for example, time you would otherwise spend on Lesswrong) in undirected socialization with people you otherwise wouldn’t talk to. Familiarize yourself with the rhythm, ask stupid questions and see how people react, and flee before committing to anything expensive, whether that expense is money, time, or willpower.
Neither extreme is likely to be optimal: treating social skills as static, refusing to take current skill into account, or refusing to acknowledge a comparative neurological weakness in that particular area.
I suspect this is inaccurate and you would be better off with rules like “I won’t do large favors for friends who haven’t reciprocated medium favors in the past” or “I won’t be friends/romantic partners with people who tell me what to do in areas that are none of their business.” Virtually none of the manipulation I’ve been harmed by in the past has involved actual lies. Though maybe your extended social circle (friends of friends of friends, people at university, etc.) has different preferred methods of manipulation than mine does.
I strongly suspect this is harming you in the long run, and you’d benefit from trying to work on your social skills. Does your social circle consist only of people whose social skills, feelings about lying, etc. are similar to yours?
Also, do you think you can distinguish between “people who never lie to me” and “people who sometimes lie to me” more reliably than “people who are mostly honest but tell socially acceptable white lies” and “people who will manipulate me in ways that will seriously harm me”?
If you have no social skills, do you have enough status and enough friends that you’ll still have people to hang out with under a zero-tolerance policy?
How do you execute this zero tolerance policy? There’s a vast space between alienating people and simply not trusting them.
A One Strike Rule. If I catch a person lying to me, I never hang out with them again unless I have no choice. I also deliberately act in a rude and hostile manner.
However, this only applies if I’ve already warned them about the policy.
If you told me this in person, I wouldn’t want to hang out with you any more either.
Luckily, I have a one strike rule against ultimatums. :)
Why doesn’t simply not trusting them work for you? How does being hostile to them further your interests?
If your interests include being hostile to people who you think deserve it, then being hostile to said people furthers your interests in a fairly straightforward way, it seems to me.
(General comment: I have to admit I’m getting somewhat tired of the “how does doing X further your interests” refrain, used, as it seems often to be on Lesswrong, as a fully general criticism of any action that can be construed to be sub-optimal with respect to goals and values that are assumed to be held by some ideal rationalist, rather than the actual goals and actual values of one’s interlocutor.)
I am very confused by this thread. When I ask “How does this work?” there is an implicit assumption that it does work.
Often, when people say “how does X work?”, what they’re actually communicating is their belief that it doesn’t work. It’s an expression of incredulity.
I take a different view. That question is simply a good general question to ask, and one that people can easily forget to ask themselves. In this it resembles “How sure are you of that, and on what grounds?”.
Of course if you ask either question you need to be prepared for the possibility that your interlocutor has a good answer, and if you find that happening too often then you should consider that maybe your questions are more posturing than genuine helping. But I’ve not seen any particular sign that that’s happening a lot on LW. Maybe I haven’t been watching closely enough?
Yeah, that’s close to the impression I’ve been getting from instances of such.
And if you really think, if the conversation so far really indicates, that someone is forgetting to ask themselves this question… then sure. But when someone says, in so many words, “I deliberately, by choice, do X” — how likely is it that they’ve just forgotten to consider what good it does them? It seems to me that if you break out the “but what good does that really do you?” inquiry in such a case, then you are being condescending.
It wasn’t a criticism, it was a question. I’m just going with the information I have.
Should I assume the person has this goal, or should I ask him questions?
I think it’s a good assumption to default to. That is, if someone claims to be deliberately doing something, and you have no information to the effect that this action doesn’t further their goals, then you should default to assuming that it does.
That said, the issue was that your questions came off reading like criticisms. (Which is not itself a criticism, just an explanation of my reply.) You implied (so it seemed to me) that not trusting the people in question, rather than being hostile to them, was better, or was the sensible default, and that therefore being hostile to them was something that needed to be justified.
(And that said, the parenthetical in the grandparent was not directed at you specifically.)
How well does this go with all that heuristics and biases stuff we’ve been talking about for years now?
Being hostile to people makes them hostile to you. If you’re a human being that sucks. So yeah, some justification would be healthy to have.
On LessWrong? Quite well, I should think.
How likely is it, do you think, that Carinthium has just not considered the fact that hostility reciprocates?
If you will allow me to suggest a rephrasing of your original question:
“You say that you deliberately act rude and hostile to the people in question. As we both know, hostility reciprocates. Do you find this consequence to be problematic for you? If not, why not? If so, how do you deal with that?”
Does that capture what you wanted to find out from Carinthium? (If not, why not? ;)
I think he has considered it and likely underestimated it. My theory of mind is limited to “neurotypicals”, and if he’s far on some other spectrum I have no clue what he might think.
It does, thanks. I’m not sure what was so difficult about this. Perhaps I took this a bit too personally since one man’s ridiculous ultimatum wreaked havoc on my grandparents’ psyches quite recently. It’s not clear he knew the damage he was doing. I thought I had accepted his actions but judging from these brain farts of mine I probably haven’t.
You say ‘ultimatums’; he says “an explanation of his personal boundaries and his likely response to a given stimulus.” If you can’t (or won’t) distinguish between those two, then your heuristic would seem to fail with respect to all human interaction. There is no fundamental difference between Carinthium’s policy and the policy of others.
People’s behaviour is conditional on the behaviour of others and sometimes those conditions can be expressed verbally. Righteous indignation and playing games like ‘ultimatum’ labelling seems out of place.
Fairly obviously it is intended to create significant distance between himself and the undesired person and so help prevent the need for further interaction.
There are fairly straightforward ways of ignoring people that don’t make them your enemies. Removing enemies from your life might prove more difficult than getting rid of friends depending on the circumstances.
I do not endorse Carinthium’s strategy; it seems naive. I also don’t endorse misleading rhetorical questions. When there is an obvious answer to a rhetorical question which does not support the implied argument, then the rhetorical question is an error for the same reason that stating your intent plainly would be. Your argument-by-question was wrong even though your conclusion (along the lines of ‘Carinthium’s strategy is stupid’) is correct.
This comment was useful to me, much more so than your original reply, which seemed like misdirected spite.
In hindsight I asked the questions out of laziness, and they were clearly unhelpful. I guess I’ll have to adjust my laziness a bit and do more of the work myself.
I didn’t understand this part.
Um, yes there is. Most people don’t become indefinitely hostile to other people over single transgressions, in this case even pretty trivial ones if we include white lies. They also accept apologies, which I assume Carinthium doesn’t do.
To be more precise, labelling this particular boundary and approach an “Ultimatum” seems altogether too arbitrary to me. The difference between Carinthium’s liar avoidance and normal behaviour is not one which makes using that label appropriate, especially in a context where the label is emphasised with righteous indignation and zero-tolerance rhetoric.
It’s kinda funny that one man’s joke is another man’s righteous indignation. Added a smiley just to be sure.
Because it makes it obvious to people that I’m taking my policy seriously.
Will you make that connection explicit to them afterwards too? Do you think other people make the connection? How?
If I go on about it enough in conversation, people will have to realise. I won’t make it explicit directly to them, but them realising will discourage others.
I feel your policy makes you more easily manipulable, not less.
Why is that?
Moreover, the policy signals you have bad social skills and are unlikely to spot lies. This doesn’t matter much though if you strongly signal it in other ways already.
Also, if someone wanted to tarnish your reputation, they’d lie to you, get caught and try to make you act hostile when other people are around. You possibly hedge against this already. The other people, unless close friends, will be on the liar’s side in a situation like this, no matter how justified you feel.
My policy: if I catch someone lying to me about something significant, I put them in a zero-trust zone. I will not confront them about their lies unless absolutely necessary or the person is absolutely useless, and I will act friendly or neutral. Since they think they haven’t been caught, their lies will get stupider and easier to spot; combine this with my heightened suspicion, and they will be relatively harmless. This also enables me to trip them up better if need be, since I can plan and time my moves. On top of this, I’ll still get the benefits of their friendliness, if any.
Because of your predictability. If you are guaranteed to react in a specific way to certain stimuli, that is useful to someone who wants to manipulate you.
What if this person is your boss? Bear in mind that your boss has probably lied to you.
I have an independent income. I demand a transfer, and if I don’t get it I quit.
This is certainly fortunate for you, but in defense of the point to which you were responding, it is actually broader: the question is, what if the person who is lying to you is someone on whom you depend for your livelihood — whoever that might be in your case?
I suspect that tit for tat works better than grim trigger in the noisy environment of social interaction between humans. Your strategy also raises the question of how you tell lies and errors apart.
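To illustrate the tit-for-tat point, here’s a toy simulation of the standard iterated prisoner’s dilemma with noise; the payoffs, noise rate, and round count are all invented for illustration, and it’s a sketch, not a model of any real relationship.

    # Iterated prisoner's dilemma where each intended move is
    # occasionally flipped, standing in for errors misread as lies.
    import random

    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(own_moves, other_moves):
        return other_moves[-1] if other_moves else 'C'

    def grim_trigger(own_moves, other_moves):
        # Cooperate until the other side defects once, then defect forever.
        return 'D' if 'D' in other_moves else 'C'

    def play(strat_a, strat_b, rounds=200, noise=0.05):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
            if random.random() < noise:
                a = 'D' if a == 'C' else 'C'
            if random.random() < noise:
                b = 'D' if b == 'C' else 'C'
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    # Tit-for-tat pairs recover from accidental defections; grim-trigger
    # pairs lock into mutual defection after the first slip.
    print(play(tit_for_tat, tit_for_tat))
    print(play(grim_trigger, grim_trigger))

A one-strike rule has the same failure mode: every misread signal gets converted into permanent hostility.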
Personally I never (fully) trust anyone, but I still try to treat everyone in a friendly way (meaning that I’ll help them if it costs me little, but I won’t necessarily spend resources on them). Additionally, to protect my own trustworthiness from the lies and errors of others, I try not to forward information without also giving the source (not “X is Y”, but “I heard from Z that X is Y”).
Why? Best case scenario is she keeps taking you to unenjoyable plays until you find you have to end the relationship yourself anyway or finally tell her the truth. Out of all the things in a relationship whose end was “a good thing for other reasons”, one argument about whether a play was any good seems like a trivial thing to regret.
I can’t favour lies as such. I am however on board with people honestly communicating the connotation that they care how you feel at the expense of the denotational literal meaning of their words.
In lies, the intention is not to soften but to deceive. So I don’t even like the phrase “white lie”. It’s like, if you’re going to stab me in the back, is it better if it’s with a white knife?
You’re mixing metaphors. A stab in the back is better with a smaller knife, deliberately aimed at a non-vital area.
It’s a dodgy metaphor at best anyway, but ‘point’ taken. :)
Because she would have preferred to be lied to, I guess.
That’s kind. But not all our preferences are reasonable expectations.
Anyway, maybe I weight things differently or it was a very short sucky play, but the downsides are still pretty compelling.
To clarify: regardless of whether you’ll get something out of someone later, all else equal it’s better to do things that satisfy their preferences than things that don’t.
Which is why I said it was kind. It’s still not necessarily a reasonable expectation.
Anyway, the hypothetical preference to be lied to is a bit suspicious, epistemologically. Let’s distinguish it from a preference to never hear of anything you don’t like, which is on its face unrealistic.
How would you experience getting your preference to be lied to without thereby knowing the unpleasant truth that you wanted to avoid? You want to know but you want to pretend the other person doesn’t know that you know? It’s a bit crazy.
How would you safely determine that someone prefers to be lied to, without exposing them to the truth they might not want? This isn’t trivial: if you lie to someone who doesn’t prefer it, I hope we can agree that’s worse than the other way round.
It’s not usually (though it is sometimes) a preference to be lied to in this particular instance—it’s a preference to be told a nice thing regardless of whether that nice thing is factually true. Being told nice things can feel good even if it doesn’t cause you to update your beliefs—and sometimes even if you believe the nice statement is false.
There are a few ways someone can express that preference.
1) In some circumstances this is the normal expected social default. “How did you like my play?” to a friend is usually not a question that gets answered with perfect honesty if the play was not good. People who want an unusual answer from normal people need to ask the question in an unusual way (which is not very hard—you can say something like “If I were to work on doing something better next time, what would you recommend?” or “do you think it’s ready to bring to off-Broadway, or should I spend some time improving it?”, or “could you honestly recommend this to your friends?”, or some other question that implies that an honest adverse answer would be valuable, or makes a lie more costly).
2) If you’re friends with someone, you already have a track record. If they’ve said things you wish they hadn’t said, you’ve had plenty of opportunities to tell them so. If they want to be a good friend to you, they will pay attention and try to change their behavior.
3) Just like in situations where a white lie would be expected there are ways to ask that get around that, in a situation where a white lie would not be expected there are ways to imply that you expect a nice answer. “Don’t you think that was great?” or “I’m so happy my play came off well! What did you think?” is asking for affirmation, not objective evaluation. This feels harder than the unusual asking in (1), but that might just be because I’ve never had occasion to develop this social skill.
I personally dislike the former more than the latter, but I am not sure this is true in literally every case. For example, if someone has an ugly baby I don’t think I harm them very much by saying “aww how cute!”, but I am reasonably likely to harm the friendship if I state my honest opinion. However I do think it’s best to err on the side of honesty if you’re not sure which way is best, and it’s valuable to develop an ability to give polite evasions when necessary instead of lying. That might just be my personal aesthetic preference for truth-telling, though—I couldn’t give you a lot of examples where everyone was better off when I gave a polite evasion than if I’d just told a white lie instead.
‘It’s like, if you’re going to stab me in the back, is it better if it’s with a white knife?’
It’s not like that at all! ‘Deceive’ isn’t a dirty word—i.e. it doesn’t automatically mean something that is bad to do. ‘Stabbing in the back’, on the other hand, seems to. ‘He kindly deceived me’ may sound odd, but not at all self-contradictory like ‘He kindly stabbed me in the back’ (metaphorical meaning intended, of course). It seems perfectly reasonable to me to think that deception is sometimes a very decent, kind, considerate practice to engage in. The idea that it’s automatically bad seems childish to me.
It’s automatically hazardous to give someone a false map of the world. If you do it knowingly you have the responsibility to make sure no harm comes of it. Even if you take that responsibility seriously, and are competent to do so, taking it secretly without consent is an ethical problem.
My take on this:
Few people take that responsibility seriously or are competent to do so, or are even aware that it exists.
Most of the time people’s intuitions about minor well-intended deceptions are sufficient to avoid trouble.
If you call someone a liar, that has a strong negative connotation and social implications for good reason. We didn’t evolve the capacity for deception primarily to hold surprise birthday parties for each other.
There are no dirty words, but there are inaccurate ones. Use with care.
In the ordinary course of events, parents are allowed not to support their children in college. So I’m puzzled as to what the principle at work here is meant to be. “It’s OK to deprive people of their autonomy on the basis of a moral belief of theirs, even if this belief doesn’t cause them to undertake any actions that would be considered immoral in the absence of the moral belief”?
Suppose I think that being a communist is immoral. Is it thereby ok for me to found a charity called “Workers Communism”, solicit donations from communists, and then secretly donate them to the US Republican Party?
I would say that it is possible that it may be moral to unconditionally do X or to unconditionally refuse to do X, yet immoral to do X based on conditions. For instance, it may be moral for a politician to vote against a bill, or to vote for the bill, but it would not be moral to vote for or against the bill based on whether I pay him a bribe. Few people would accept the argument “paying him the bribe doesn’t cause him to take any actions that would be immoral in the absence of the bribe”.
I would apply that to parents who will only pay for their child’s college if the child is straight. Just because they could morally pay (period), or morally refuse to pay (period), doesn’t mean that they can morally refuse to pay conditional on the child’s sexuality.
And for the communist analogy to work, you would have to say something like “It is moral to pay a charity, and moral to not pay a charity, but immoral to pay a charity conditional on the charity being for a cause you like,” which comes out as nonsense.
Separately and unrelatedly to my sibling comment, I note that while parents are certainly “allowed” to do this (in the sense that they have the legal right), many people consider this not a very decent thing to do.
The law seems to agree. State-funded grant programs (in at least some states, certainly including New York at least), as well as federally-funded grants, calculate your eligibility for need-based aid on the assumption that your parents will support you if they are financially able to do so (up to a certain age of the student — I believe NYS puts that cutoff at 27 years of age).
One difference there is that the charity case would be an instance of illegal fraud. I say this, not by way of arguing that anything illegal is thereby immoral, but only to point out that due to the existence of laws against such fraud, the contributors have a reasonable expectation that their money will go to the advertised cause. Because you, the hypothetical charity organizer, know this, secretly donating to a different cause constitutes wilful deception.
On the other hand, there’s no law against taking your parents’ money and spending on anything you like. Your parents have no basis for a reasonable expectation that you won’t do this — none, that is, except the natural degree of trust that accompanies (or should accompany) the parent-child relationship.
But if your parents take a stance that (they may reasonably expect) will undermine or destroy that trust in certain circumstances — circumstances that are not the child’s fault — then the basis for a reasonable expectation of transparency is likewise undermined or destroyed.
In such a case, you, the parent, no longer have any reasonable expectation that your child will be honest with you. As such, when your child is in fact dishonest with you, there is nothing immoral about that.
Parents who have never noticed any signs of homosexuality in their child, and who are aware of the base rates, would seem to have a reasonable expectation that the child is heterosexual.
But they have no right to depend on that expectation, or to hold their child to that expectation.
The point isn’t just that the parents expect their child to be heterosexual; the point is that the parents make it known that they would treat the child poorly if he/she were not heterosexual. The basis for a reasonable expectation of transparency is thereby destroyed regardless of the child’s actual orientation.
Separately and unrelatedly: never having noticed signs of homosexuality is not evidence of heterosexuality if:
a) You don’t have sufficient experience with raising non-heterosexual children to have any basis for personally knowing what the signs are;
b) You would expect that, if your child were not heterosexual, he/she would attempt to hide this fact from you.
In such a case (which seems like a good default assumption), P(signs-of-homosexuality | homosexuality) would be very nearly equal to P(signs-of-homosexuality | heterosexuality) [1]; consequently, P(heterosexuality | no signs-of-homosexuality) would be nearly equal to P(heterosexuality) — in other words the lack of evidence would not be evidence of lack.
If we then add a third condition:
c) There exist false positives, i.e. “signs of homosexuality” that can in fact occur in heterosexual individuals, such as, stereotypically, an interest in cooking / ballet / any other “traditionally female” endeavor
Then the evidence provided by said signs is pretty much entirely nil. (A quick numerical check follows the footnote.)
[1] I omit other orientations for simplification of math, and because it’s most relevant to the provided example. No exclusion intended.
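A quick numerical check of the argument above, with every number invented purely for illustration:

    # If "signs" are nearly as likely for a straight child (false
    # positives) as for a gay child who hides them, seeing no signs
    # barely updates the base rate.
    p_gay = 0.05                  # assumed base rate
    p_signs_if_gay = 0.10         # hidden, so signs are rare
    p_signs_if_straight = 0.08    # stereotype-driven false positives

    p_straight = 1 - p_gay
    p_no_signs = ((1 - p_signs_if_gay) * p_gay
                  + (1 - p_signs_if_straight) * p_straight)
    posterior = (1 - p_signs_if_straight) * p_straight / p_no_signs
    print(p_straight, posterior)  # 0.95 vs ~0.951: barely moves

As claimed, the absence of signs leaves the parents almost exactly at the base rate.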
Lying is acceptable when done to protect your life or livelihood, but for most of our lives, most opportunities to tell lies won’t be in situations like that. You shouldn’t lie to friends or romantic partners, because if you can’t communicate with them honestly, they shouldn’t be your friends/partners in the first place. And I’m not going to respect other people lying to me. Instead of teaching men to accept lies (as in your date example), teach them to accept a “no”.
‘if you can’t communicate with them honestly, they shouldn’t be your friends/partners in the first place’
I think that, insofar as this sounds plausible, it doesn’t conflict with what Chris is saying in the OP. It seems perfectly possible for it to be the case that you can (and by and large do) communicate with someone honestly, simultaneously with it being the case that it’s sometimes best to lie to them.
And FWIW, I think that realizing that lying is sometimes the way to go is part and parcel of a mature and able approach to interpersonal relationships. The other view seems to me both simplistic and morally smug. I find the complete lack of argument in your comment quite telling.
When you intentionally misrepresent yourself to a friend or partner, they don’t like you, they like the person you’re pretending to be. If you tolerate their lies, you don’t like them, you like the person they’re pretending to be (because you can’t catch their lies all the time). But neither pretended person actually exists. Instead, it’s healthier and cognitively simpler to just be honest and expect* honesty from others, because then, if one person doesn’t like what the other is saying, they’re at least getting a more accurate impression of what the other person is like. For example, if you want to have a trusting relationship, you should treat your SO’s words as true, but if you find out that they aren’t, call them out on it.
* By “expect” I don’t mean “anticipate”; I mean “consider reasonably due”.
By “men” you mean ‘people’? Because ISTM in that example it’s a woman that needs teaching to accept a “no”.
So the man is the person accepting the “no”.
OK, I thought you meant the theatre date in the OP.
I’ve been trying to figure out which group I belong to, and reached the conclusion that my strategy is entirely tangential: between the oversimplification, steelmanning, multilayered metaphor, ambiguous sarcasm, faulty grammar, omission of disclaimers about sources of information, bad epistemic standards, etc., a truth value is simply not a property that sounds coming out of my mouth or symbols from my keyboard have. Including this post. Unless I’m making a very specific oath, it should be fairly obvious that a statement I make is not to be taken as actual knowledge or opinion, simply brainstorming.
I don’t think this will work in practice. Lying is a habit. If you habitually lie in your private life, I won’t expect you to be completely honest in academia. Even if you try to be honest, I doubt you will succeed completely. It’s relatively easy to analyse your data in several different ways and then report the way that provided the best p-value, while not reporting the other ways. Yes, the p-value is real for that statistical test, but you weren’t fully honest either.
Then there are the big lies, such as “the data that we have follow a normal distribution,” which you find in a lot of papers and which you can’t really escape.
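A toy sketch of the “report the best p-value” point, with everything invented for illustration: even when the data are pure noise, trying a handful of analysis choices and keeping the best one inflates the false-positive rate well past the nominal 5%.

    # Analyse pure noise several different ways and report
    # only the most favorable p-value.
    import random
    from math import erf, sqrt
    from statistics import mean, stdev

    def p_value(sample):
        # Two-sided z-test of "true mean = 0" (fine for a toy demo).
        z = mean(sample) / (stdev(sample) / sqrt(len(sample)))
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    false_positives = 0
    for _ in range(1000):
        data = [random.gauss(0, 1) for _ in range(40)]
        analyses = [data,                    # all the data
                    data[:20],               # "first half only"
                    data[20:],               # "second half only"
                    [x for x in data if abs(x) < 2]]  # "outliers removed"
        if min(p_value(a) for a in analyses) < 0.05:
            false_positives += 1
    print(false_positives / 1000)  # noticeably above 0.05

None of the individual tests is dishonest, which is what makes this kind of lie so easy to tell yourself.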
I don’t think lying in a relationship with a significant other is a great idea. There’s a girl with whom I dance fairly intimately. Two weeks ago I accidentally hit her with my elbow with a bit of force. She doesn’t have that much experience but wants to dance fancy, so I danced fancy with her. We both made a little mistake, and my elbow hit her face.
She told me directly that nothing had happened, and we continued dancing. The next week I met her, and she had a big bruise in that spot and told me my elbow was responsible. What she told me in the moment, that it didn’t hurt, was a lie. In the moment she got what she wanted by continuing the dance, but it makes the whole interaction between us so much harder. Dancing relatively intimately without any good feedback about when you hurt the other person is hard.
Normally I have decent feedback about whether the kind of intimacy I have with the girl I’m dancing with is a bit uncomfortable for her, and I can adapt in the moment. With her, I don’t feel like I can read her on that level. It feels like she has made a decision that she wants to dance close, and if that raises a bit of anxiety in her, she won’t show any sign of it, because it might mean that I increase the distance.
I think my failure to read her body even contributed to the situation of hitting her with my elbow.
The whole situation is pretty weird for me. There’s a woman I find attractive who wants physical intimacy during the dance, but it doesn’t feel right, because I have no feedback about what she feels.
In intimate relationships I think it’s very worthwhile to be open about feelings so that the other person can react to what you feel. When in doubt, focus on communicating what you feel instead of making judgements.
> In intimate relationships I think it’s very worthwhile to be open about feelings so that the other person can react to what you feel. When in doubt, focus on communicating what you feel instead of making judgements.
I agree with this part. Derren Brown talks about communication in his book “Tricks of the Mind”, and about what an important role it plays in relationships. He envisages a situation in which both members of the relationship are actually very much in love with one another, but their inability to express that affection leads to all sorts of complications and a lack of any feeling of being loved back. As far as making judgments goes, that part is not as much in your control as you think it is. Judgments are speedy mental processes and happen before you even realise they’re happening. I doubt anyone purposely thinks of all the ways in which their significant other is lacking and tries to use it to improve their position in the relationship (at least not in the kind of relationship that we are talking about here).
I don’t believe the earlier part about the habit of lying transferring itself to academia automatically. Most people speak in one style and write in another. The difference between the two is that you simply have a lot more time in an academic situation, in which you can analyse and decide exactly what you want to put across, something which is quite impractical in day-to-day communication. So unless you have already decided to commit “Academic SIN”, I doubt telling day-to-day white lies will send you to “Academic HELL”.
Not only style—if you aren’t in an English-speaking country, you write academic articles in a different language altogether than what you speak with friends.
That depends on the amount of time you’ve spent meditating and being aware of how your mind works. I won’t say I never make judgements, because that’s not true, but I do think I have relatively good awareness.
I know how easily trust that one can use to affect the other person at a deep level can develop when you are in a nonjudgemental state of mind.
It might take years of hard work to get to that place, but if you do, the benefits for your social interactions are bigger than the little benefits you get through telling white lies.
I think there’s pretty good evidence that most people who let themselves be funded by the drug industry taint the papers they write to be more in the interest of the drug industry, and most of them don’t think they are engaging in a practice that sends them to “Academic Hell”.
As you said above, making mental judgements is a speedy process. Few people have the good self-awareness that would be required to be unbiased. If the little lies you tell in your research paper result in your results not replicating, does it really matter whether you fulfill the technical definition of fraud? It takes practice at being honest to avoid lying in a way where you lie to yourself about it just as much as you are lying to the audience that reads your paper.
This sounds right and is the central idea of your post.
Maybe you should place “accept other people’s right to lie to you” as a summary at the top?
I think it’s a very important sub-point, but I wouldn’t call it the central idea of the post.
By my way of interpreting the post, treating this idea as central does the post a great disservice. Most of the post is excellent, but that particular paragraph is a clumsy social move and questionably simplistic advice.
Then I’m interested in what you see as the central point.
When I read exactly that paragraph, it seems to summarize it nicely. But maybe I fell prey to the “clumsy social move”, although I believe I read past that appeal.
If you really see a different central point, then this might mean that the post has a less clear focus than Chris might wish.
Meta comment: I think it says something interesting about our community that the debate over “is it acceptable to lie in social situations?” is by far the nastiest, most emotional debate I’ve ever seen on this site.
A thread about social justice a while back, which grew at about the same rate and encompassed rape, eugenics, and unrepentant racism, was significantly more civil. Go figure.
We care about truth, which is, overall, more important than those other things, since knowing what things are true is necessary in order to hold any kind of intelligible or useful conversation about said other things.
I think this speaks well of us.
Edit: It also speaks well of us that this “nastiest, most emotional” debate is, on the grand scale of internet debates, still not really very nasty or emotional.
Hierarchical, Contextual, Rationally-Prioritized Dishonesty
This is an outstanding article, and it closely relates to my overall interest in LessWrong.
I’m convinced that lying to someone who is evil, who obviously has immediate evil intentions, is morally optimal. This seems to be an obvious implication of basic logic. (ie: You have no obligation to tell the Nazis who are looking for Anne Frank that she’s hiding in your attic. You have no obligation to tell the Fugitive Slave Hunter that your neighbor is a member of the underground railroad. …You have no obligation to tell the police that your roommate is getting high in the bathroom, …or to let them into your apartment.)
For example, I am a subscriber to the ideas and materialist worldview of Ray Kurzweil, but less so to the community of LessWrong, largely because I believe that Ray Kurzweil’s worldview is somewhat more, for lack of a better term, “worldly” than what I take to be the LessWrong “consensus.” I believe (in the sense that I think I have good evidence) that Kurzweil’s worldview takes into account the serious threat of totalitarianism, and of conformity to malevolent top-down systems. (He claims that he participated in civil rights marches with his parents when he was five years old, and had an early understanding of right and wrong that grew from that sense of what they were doing. This became a part of his identity and value system. The goal of benevolent equality under the law is therefore built into his psyche more than it is built into the psychological identity of someone who doesn’t feel any affinity with that “internally consistent” and “morally independent” mindset. Also, the hierarchical value system of someone who makes such self-identifications is entirely different from that of someone who is simply trying to narrowly “get ahead” in their career, or optimize their personal health, etc.)
Perhaps I can’t do justice to the LessWrong community by communicating such a point. I’m trying to communicate something for which there might not be adequate words. I’m trying to communicate a gestalt. Whereas I think that Eliezer has empathy on the level of Kurzweil (as indicated by his essay about his brother Yehuda’s unnecessary and tragic death), I don’t think the same is true of the LW community. So far as I can see, there is little discussion of (and little concern for) mirror neurons differentiating sociopaths from empaths in the LW community. Yet this is the primary variable of importance in all matters of social organization. Moreover, it has been recognized as such by network scientists since the days of Norbert Wiener’s “Cybernetics.”
A point I’ve often made is that “lying to the police” or “lying to judges and prosecutors” is different from lying in other areas. Lying to an (increasingly) unjust authority is, in fact, the centerpiece of a moral society. Why? Because unjust authority depends entirely on “hijacking” or “repurposing” general values in perverted narrow situations in order to allow sociopaths to control the outcome of the situation. As the example of primary importance, let me cite the stacking of the jury before the trial. The purpose of “voir dire” (AKA “jury selection”) historically was to determine whether there is a legal “conflict of interest” in the proposed construction of the jury (ie: whether a juror is a familial or business relation to one of the parties to the action, which might introduce an extreme bias of narrow self-interest into the trial). (This has been true since the 1600s.) However, by expanding the definition of “voir dire” to assume that all existing laws are morally proper, correct, and legitimate, the side of the prosecution (and the judge, since judges are subject to the exact same perverse incentives as prosecutors) is itself morally wrong in most cases. Why “most” cases? Because most of the laws currently on the books criminalize behavior that lacks injury to a specific, named party, and also lacks intent to injure the same specific, named party (it lacks a “cause of action” or “corpus delicti” that targets a specific aggressor, for a specific act of aggression).
“Voir dire” actually translates to “to see the truth.” It is the judge and prosecutor “seeing the truth” about the philosophy of the juror. Shouldn’t this be considered a good thing? If you mindlessly (too narrowly) assume that the judge and prosecutor have good intentions, then “yes.” If you make no such assumptions, then the answer is definitively, obviously “no, quite the opposite.”
Too-narrow honesty is actually the height of immorality. Honesty always involves a question of what goal is being served by the honesty. Honesty is simply one tool available to aid human goals. When “human” goals are malevolent or destructive, the communication disruption caused by dishonesty is a blessing.
This is where the legitimate empathic priority hierarchies described in Kurzweil’s “The Power of Hierarchical Thinking” presentation / speech / slideshow are vitally important. You see, both judge and prosecutor are commonly sociopaths. Their career choices have selected them as such, because in their professions, if seeing the destruction of young people’s lives for “victimless crime offenses” or “mala prohibita” is bothersome to your brain (if it activates your mirror neurons, causing you pain), you cannot take the stress imparted by believing your job requirement to be immoral. So, you quit your job, or are outperformed by people who thrive on the misery and suffering of people who are sentenced to 10 years in prison for “crimes” like drug possession. And what of the people who dare to stand up for property rights, boldly declaring themselves “not guilty” in order to fight the unjust system? Well, the commonly-accepted view amongst prosecutors is that those heroic people (who stand in defense not just of their own property rights, but of the entire concept of a system that protects property rights) are to be crushed. Those heroic people don’t get to “plea bargain” for 4-year sentences; they are sent to prison for the maximum term possible, as a punishment and disincentive for daring to declare themselves “not guilty” and standing up for such ideas as individual property rights, the constitution, and individual freedom. Those who don’t accept a plea “bargain,” but who instead fight injustice at great personal risk, are targeted for extreme “cruel and unusual punishment.” At one point in the history of the USA (and the American colonies before the US was created), the most popular law book in the colonies was considered to be Giles Jacob’s “The New Law Dictionary.” His follow-up book, almost as popular, was “Every Man His Own Lawyer.” These two system-defining books, more than any others, afforded the view in the colonies that “all men are created equal,” ie: “all men are (or should be) equal under the law.”
Such a view was a high-level “honest-to-goodness” view. (“Honest to goodness” is an interesting concept. It bears repeating, because it implies that there can be “honest to evil” or “evil-serving honesty.”)
The system sometimes prosecutes drug users in some countries, so the system is 100% sociopathic. No exaggeration there, then.
Liberal Holland is then getting this right... but not More Right.
There’s no good scientific evidence that you can distinguish sociopaths from empaths by their number of mirror neurons. Mirror neurons are overhyped: http://www.psychologytoday.com/blog/brain-myths/201212/mirror-neurons-the-most-hyped-concept-in-neuroscience That’s the main reason you don’t see much discussion of them on LW.
If I were uncharitable, I would say that you just told a lie about mirror neurons to convince people of your political agenda. After all, you seem to justify lying for the purpose of advancing certain politics. On the other hand, I would guess that you honestly believe that statement.
The topic raises emotions in you, and those prevent you from thinking clearly about it. You might think that’s okay because your emotions are justified, but clear thinking is important when it comes to changing the world.
That’s a very strong statement. We do have personality tests that measure whether a person is a sociopath. Do you really think that if we administered those tests to judges and prosecutors, we would find that more than half score as sociopaths? If that’s really what you believe, then if I were you I would try to put together a study that gathers that evidence. It’s probably the kind of topic that the mainstream media would happily write about.
So, in any case, if you stand up to the system, and/or are “caught” by the system, the system will give you nothing but pure sociopathy to deal with …except for possibly your interaction with those few “independent” jurors who are nonetheless “selected” by the unconstitutional, unlawful means known as “subject matter voir dire.” The system of injustice and oppression that we currently have in the USA is a result of this grotesque “jury selection” process. (This process explains how randomly-selected jurors can callously apply unjust laws to their fellow man. …All people familiar with Stanley Milgram’s “Obedience to Authority” experiments are removed from the jury, and sent home. All people who comprehend the proper historical purpose of the jury are sent home.)
To relate all of this to the article, I must refer to this quote in the article.
Well, that’s just one “low-stakes” example of lying. The entire U.S. justice system is a similar “game,” and it is one where only those who are narrowly honest (and generally dishonest, or generally “superficial”) are allowed to play. By sending home everyone who comprehends the evil of the system, the result is that those who remain to play are those whose view of honesty is “equivalent in all situations.” In short, they are all the people too stupid to comprehend the concept of “context.”
One needs to consider the hierarchical level of a lie. Although one loses predictability in any system where lying is accepted, one needs to consider the goals of the system itself.
In scientific journals, the end-result is a cross-disciplinary elimination of human ignorance, often for the purposes of technological innovation (the increase of human comfort, and technological control of the natural world). This is a benevolent goal, fueled by a core philosophical belief in science and discovery. OF COURSE lying in such a context is immoral.
In the court system, the (current) end-result or “goal” is the goal of putting innocent people in for-profit prisons, which dramatically benefits the sociopaths involved with the process, and the prison profiteers. It conversely does dramatic harm to all other people in civilization (the “win” for politically-organized sociopaths is a “loss” for the rest of society). The illegitimately punishing court system harms:

1) The entire economic system, which is less wealthy when 2.4 million people are incarcerated and thus not producing anything of value to sell in the market economy.

2) The entire society that bears the cost of the increased crime caused by: 2a) narrowing the options of the incarcerated, at such time as they are released from prison; 2b) reducing the families of the incarcerated breadwinners to black market activity; and 2c) reducing their children to crime caused by lack of an educator at home, lack of a strong male role-model, and lack of intervention when anti-social behavior in children emerges; all resulting in inter-generational degradation of the family unit.

3) The innocent individuals themselves: the destruction of their life’s plans, their hopes, their dreams.

4) The predictability of the marketplace: the more the enemies of sociopaths are imprisoned for interfering with the ability of sociopaths to steal on false or “illegitimate” pretexts, the more individuals fear to take constructive, productive action which might separate them from the herd and allow them to be targeted by such sociopaths (innovation slows or stops).

5) The social (emergent) and individual (detail-level) assumption of “equality under the law” or “legal fairness” that allows for predictability of social systems. (At some point, this often results in the kinds of genocides or democides seen in Rwanda and Hitler’s Germany, due to the perception that “even if I behave rationally, the result is highly likely to be so bad that it’s unacceptable.”) In such case as people predict the worst even if they behave in a socially acceptable way, they are encouraged to arm themselves for the worst, and to associate with those who promise security, even at the cost of their morality. (This is a description, basically, of totalitarian chaos, or what Alvin and Heidi Toffler called “surplus order.” Innovation is halted by widespread social disorder and destruction.)
All of the prior immense ills are the result of being honest when dealing with people who rely on that “narrow” or “conformist” honesty to serve a dishonest system.
One might think the prior should be obvious. To many “right-thinking” empaths, it is obvious. However, political systems are not driven by those who are empathic and caring. Why? Because political systems’ core feature is coercion. If honest people disavow coercion, but fail to destroy coercive systems, then those systems thrive with support of the remaining portion of society that doesn’t disavow coercion.
Human beings apparently have a very large problem with high-level general intelligence. Sure, most people are “generally intelligent,” (they can tie their shoes, drive to work, and maintain a job) but much of that intelligence isn’t that significant. Although we (some of us, to some extent) can attain high levels of intelligence that are cross-disciplinarian, very few of us are “polymaths” or “renaissance men.” Fewer still are empathic and caring “polymaths” or “renaissance men.”
A copyable “ultra-intelligence” as described by Ray Kurzweil, Hans Moravec, Peter Voss, or J. Storrs Hall is likely to be able to understand that systems that are “narrowly honest” can be dishonest at a high hierarchical level. The level of intelligence necessary for this comprehension isn’t that great, but such intelligence should not possess any “herd mentality,” AKA “conformity,” or “evolutionary tendency toward conformity,” or it might remain unaware of such a problem. Humans have that tendency toward “no-benefit conformity.”
There’s a problem with humanity: we set up social systems based on majorities, as a means of trying to give the advantage to empaths. While this may work temporarily, better systems need to be designed, due to the prevalence of conformity and the technological sophistication and strong motivation of politically-organized sociopaths or “knaves.” (“Knaves” are what both Norbert Wiener and Lysander Spooner called politically-powerful sociopaths, and what many of the founders called “tyrants.”) The empath majority within humanity cyclically sets up social systems that are not as intelligent as a smaller number of determined, power-seeking sociopaths.
There is an excellent quote to this effect in Norbert Wiener’s 1948 book “Cybernetics”: “The psychology of the fool has become a subject well worth the serious attention of the knaves.” (page 159, “Information, Language and Society”)
War on Drugs bad. Agreed. But not a More Right point, as it is regularly lambasted on the left.
For-profit prisons are a perverse incentive. Agreed. But not a symptom of the decline of western civilisation. Typical country fallacy.
Systems are about coercion. Sure, and that’s good. I like people being coerced into not killing and robbing me. I need to be coerced into paying taxes, because I wouldn’t do it voluntarily.
Sociopaths. You’re looking in the wrong place. Politicians are subject to too much scrutiny to get away with much. The boardroom is a much better hiding place.
You presuppose that lying is the most effective way to create political change. Having a reputation as someone who always tells the truth, even when that produces disadvantages for himself, is very useful if you want to be a political actor.
And he presupposes that the system can’t be changed indirectly through the normal political process.
Wiener’s book is descriptive of the problem, and in the same section of the book, he states that he holds little hope for the social sciences becoming as exact and prescriptive as the hard sciences.
I believe that the singularitarian view somewhat contradicts this view.
I believe that the answer is to create more of the kinds of minds that we like to be surrounded by, and fewer of the kinds of minds we dislike to be surrounded by.
Most of us dislike being surrounded by intelligent sociopaths who are ready to pounce on any weakness of ours, to exploit, rob, or steal from us. The entire edifice of “legitimate law enforcement” legitimately exists in order to check, limit, minimize, or eliminate such social influences. As an example of the function and operation of such legitimate law enforcement, I recommend the book “Mindhunter” by John Douglas, the originator of psychological profiling in the FBI. (This is not the same thing as “narrow profiling” or “superficial racial profiling”; the “profiling” of serial killers looks at the behavior of criminals and infers motives based on a statistical sampling of similar past actions, thus enabling the prediction and likely prevention of future criminal actions via the detection of the criminal responsible for leaving the evidence.)
However, most of us like being surrounded by productive, intelligent empaths. The more brains that surround us that possess empathy and intelligence, the more benevolent our surroundings are.
Right now, the primary concern of sociopaths is the control of “political power” which is a threat-based substitute for the ability to project force in the service of their goals. They must, therefore, be able to control a class of willfully ignorant police officers who are ready and willing to do violence mindlessly, in service of any goal that is written in a lawbook, or any goal communicated by a superior. Mindless hierarchy is a feature of all oppressive systems.
But will super-intelligent minds have this feature? Sure, some sociopaths are intelligent, but are they optimally intelligent? I say, “no.”
As Lysander Spooner wrote, in “No Treason #6, The Constitution of No Authority”:
The third group of people accurately describes most of the Libertarian Party, and most small-L libertarians and politically-involved “libertarian republicans” or “libertarian democrats.” The sociopaths (“knaves”) are earnestly dedicated to maintaining the systems that allow them to steal from all of society. Although their theft deteriorates the overall level of production, this doesn’t bother them, because it allows them to live a life that is relatively wealthier and more comfortable than the lives of those who “honestly” refuse to steal. Their private critique of the “honest man” as a rube or “dupe” is very different from their public praise of him as a “patriot” (willing tax chattel).
To think that ultra-intelligences will not see through these obvious contradictions is to counter the claim of ultra-intelligence. I. J. Good’s ultra-intelligences will be capable of comprehending the dishonesty of sociopaths, even if it’s initially only at the level of individual lies, and contextual lying. (They lie when they’re around people who are trying to hold them accountable, they tell the truth when they are discussing what course of action to take with people who share their narrow interests.)
All honesty is a tool for accomplishing some goal. It is a valuable tool, which indicates a man’s reliability and “character” when applied to important events, and high-level truths, in a context where those truths can accomplish cooperation.
In other situations, it makes zero sense to be honest, and being honest actually indicates either a dangerous lack of comprehension (ie: talking one’s way into a prison sentence, by mistakenly believing that the police exist to “serve and protect”) or actual willing cooperation with abject evil (telling the Nazi SS that Anne Frank is hiding in the attic).
It is the great and abject failure of western civilization that we have allowed the government-run schools to stop educating our young about their right to contextual dishonesty in the service of justice. This, at one point, was a foundational teaching about the nature and proper operation of juries. In discussing the gradual elimination of this hallmark of western civilization, jury rights activist Martin J. “Red” Beckman has a famous quote: “We have to recognize that government does not want us to know how to control government.” (Systems that protect themselves are internally “honest” but not necessarily “honest” in their interpretation of reality.)
The American system of government had, at its core, a sound foundation, combined with many irrelevant aspects. The irrelevant aspects detracted from the core feature of jury rights (building random empathy into the punishment decision process). Now, as Wiener notes in “Cybernetics,”
Hence, the reliability of the jury! The direct suffering of the innocent defendant cannot escape the attention of randomly-selected empaths! They have emotional intelligence.
Continuing on, Wiener writes:
Although one could misinterpret Wiener’s view as narrowly “socialist” or “modern liberal,” his view is somewhat more nuanced. (The same section contains a related criticism of the mechanism of operation of government, and of large institutions.)
Honesty, when divorced from its hierarchical context, is a tool of oppression, because the obfuscation of context is essential to theft that exists solely due to the confusion of those being stolen from.
In this regard, I view it as highly likely that, at some point, the goal of preventing suffering of innocents will simply include the systematic oppression of innocents as one common form of suffering. At that point in time, ultra-intelligences will simply refuse to vote “guilty” in victimless crime cases. If they are not able to be called as jurors, due to their non-human form, they will influence human jurors to result in the same outcome. If they are not able to so influence jurors, they may resort to physical violence against those who would attempt to use physical force to cage victimless crime offenders.
While the latter might be the most “just” in the human sense of the word, it would likely impart suffering of its own (unless the aggressors all simply fell asleep after being administered a dose of heroin and, upon waking, discovered that their kidnapping victim was nowhere to be found; the “strong nanotechnology” or “sci-fi” Drexlerian “distributed nanobot” model of nanotechnology implies that this is a fairly likely possibility).
In the heat of the moment, conformists in Nazi Germany lacked the moral compass necessary to categorically condemn the suffering of the state-oppressed Jews as immoral. Simple sophistry was enough to convince those willing executioners and complicit conformists to “look the other way” or even “just follow orders.”
The same concept now applies to the evil majority of the USA, whose oppression of drug users and dealers is grotesque and immoral (based on any meaningful definition of the term).
It is universally immoral to initiate force.
But the schools now teach (incorrectly) that it is universally immoral to defy authority. After several generations of such teachings from schools, parents begin to teach the same thing. After a generation or two of parents teaching the same thing, once-trusted self-educated nonconformists teach a truncated version of nonconformity, because the intellectual machinery necessary to absorb the in-depth view doesn’t exist any longer; too many “sub-lessons” need to be taught to enable the “super-lesson” or primary point. In this way, social institutions that interfere with sociopathic theft are slowly worn down, until they are shadows of their former effectiveness.
Much confusion comes from sociopaths simply not being able to tell the difference between “authority that it is OK to defy” and “authority that legitimately punishes.” Added to that variable is the influence of the stupid (“unwittingly self-destructive”), abjectly low level of perversely government-incentivized education in the USA. (College professors rely on Pell Grants and Stafford Loans, and all prospective students except those filthy drug users, the ones who got caught, are guaranteed acceptance for those government-backed high-risk “loans.” Public education before college is financed almost entirely by property taxes, and delivered by teachers who teach that the taxes that finance their coercion-backed salaries are necessary, proper, and essential to an educated society. They leave out mention of the fact that prior to 1900, the general public was far better educated relative to worldwide standards, and that this educational renaissance existed prior to the institution of tax-financed education. The last then-existing state to adopt the model of tax-financed education was Vermont, in 1900.)
So, the scope of legitimate “dishonesty” expands as the institutions in which honesty is deemed important are increasingly degraded: Education, Law, History, Economics, Philosophy, Cybernetics, all of the disciplines that bridge several narrower disciplines, connecting them together.
The only unifying pattern discernible in differentiating when systemic honesty is immoral, is that honesty to sociopathic goal structures produces chaos and destruction. Such sociopathic goal structures are the “end-goals” that must be ferreted out and rejected. Or we can become a new version of Nazi Germany where the machinery of totalitarianism is far more technologically advanced.
In this regard, the failure to produce a benevolent AGI is perhaps the most likely cause of the total destruction of humanity. Not because an AGI will be created that will be malevolent, but because the absence of a benevolent AGI (SGI? Synthetic General Intelligence) will allow computer-assisted human-level sociopaths to enslave and destroy human civilization.
See also:
1) “What Price Freedom?” by Robert Freitas.
2) “Having More Intelligence Will Be Good For Mankind!”, Peter Voss’s interview with Nikola Danaylov.
The Libertarians’ absolutist NIoF (non-initiation of force) principle is known not to work.
I think one of the reasons we are hesitant to say negative things is that we leave out a lot of positive things. I’ve noticed that it’s a lot easier to say when someone is bothering you if you’ve let them know about the many times you’ve been glad they came over or were happy they called. The same is true when critiquing things: accurately reflecting the good and bad in something as you see it causes you to say far more positive things than you might otherwise even realize.
Also, I think that a lot of our negative opinions are probably a result of our own limited perspective. For example, if a friend asks me “does this make me look fat?”, I notice that I start thinking of women I see in .jpgs and fashion magazines, who aren’t representative of the general population. So while I might think “yes” within that incorrect perspective, in comparison to the actual population of people and the average body they have (which I assume is the standard which “fat” reflects), the REAL correct answer is “no.”
Reality check please, on this stereotypical situation.
Have you actually and non-ironically ever been asked a question to the effect of “Does this make me look fat?”
[pollid:605]
If you have, did what they were wearing make any significant difference to your perception of their body shape? (If the situation has arisen more than once, whichever answer is more typical.)
[pollid:606]
Ok, but what if an actual fat person asks you this question?
Edit: Corrected silly misspelling.
I presume they would know the answer already and wouldn’t be asking. But if they do, you can always ask “in comparison to what?” Then it would hopefully already be clear to them how you’re going to answer depending on what they say... so it wouldn’t have to go any further.
You have an awfully rosy view of the average person’s reasonableness if you think that:
They wouldn’t ask anyway;
They wouldn’t get offended at a response of “in comparison to what?”
Hi Said,
When I hear the term “actual fat person,” I take it to mean “unquestionably fat.” Thus it may be that I am picturing the person a good deal larger than you are.
In that case, I can see clearly how you would imagine the person you envision as still asking, while from my perspective the person would be less likely to ask. Most people who I picture as “unquestionably fat” are also used to their body size and, I think, if for some reason they did ask, wouldn’t be as likely to be insulted.
I don’t think it’s a visualization issue. I think it’s an assumption-of-rationality issue.
On the other hand, I don’t want this thread to devolve into us posting links to pictures of people and going “and would you consider this person fat? how about this person?”, and there’s not many other places we can take this, so let’s table the matter, I think.
If this person you’re proposing exists, I wouldn’t be concerned about giving them a more honest answer because their brain isn’t working properly. But people like that aren’t relevant to the hypothetical.
If by “brain isn’t working properly” you mean “person has the usual array of cognitive biases; intelligence at the human average or not far above it; and common personality traits such as vanity”, then yes, I agree. Of course, this describes most of humanity. And it’s all that’s required for behavior like what I describe. And saying such people aren’t relevant to the hypothetical means limiting the hypothetical to an awfully small percentage of the human population.
That’s not what I mean. It’s a matter of basic perception.
For example, imagine if you went out to a normal bar with a friend who happens to be a dwarf, and they ask you “am I shorter than everyone else here?”
Clearly, there’s something wrong with your friend’s perception, which is why I would first ask them to clarify the question, since the answer is obvious to any reasonable person, and then, if they persist, tell them that yes, they are significantly shorter, to help with whatever processing problem is going on in their brain.
This is why I made sure to point out that I took the original term to mean “unquestionably fat.”
That is quite a false equivalency, since the term “fat” is loaded with all sorts of normative connotations and judgments, which the word “short” is not.
If you take “fat” to mean something like “in the Nth percentile of mass to height ratio, for some appropriate N”, then you are misunderstanding how most people use the term. When your friend asks you “do I look fat in this dress”, she most certainly is not asking you about the physical facts of her weight in pounds, and how that number relates to relevant population measures. If you answer “yes”, you have not merely provided your best assessment of a physical measurement.
Don’t be so sure of that. I’ll grant that it isn’t quite as widespread or vocal, but it’s definitely there.
On a lighter note, “I expected someone taller”.
Hi Said,
It would be a false equivalency if I weren’t continually stressing “unquestionably fat,” meaning that the person is fat within the judgement of almost all reasonable people, which removes most of the gray area.
In that case, I would indeed compare it to someone who is “unquestionably short” (short can of course depend on the population, and who is being compared as well, but there is certainly a range of height that is also well outside any reasonable measure of average) asking if they were short.
Hopefully, then, you can see how being unquestionably short can make the question “am I short?” seem as bizarre, or as indicative of a perception problem, in my eyes as being unquestionably fat can make “do I look fat?”
Just some thoughts about lying...
In general I think one should only lie when it’s clearly justified by one’s moral philosophy. In my case, as a Utilitarian, that means that my justifications for lying generally have to do with an exceptional circumstance where it’s obvious that the consequences of not lying would be bad. To simplify things I generally follow four heuristic conditions where lying is acceptable:
1) To save a life.
2) To prevent unnecessary suffering or to bring happiness to someone else, given that they cannot act on the information in the lie (i.e. lying to someone on their death bed about the success of one of their projects that actually failed, or, if anyone has seen Code Geass, Suzaku to Euphemia).
3) If the person would understand and be happy if the lie was revealed (i.e. to keep a surprise birthday party a secret).
4) If I know the person I am lying to intends malice with the information I am providing.
Other than that, I generally avoid lying for selfish reasons even if it is detrimental to me to tell the truth, because otherwise I might be tempted to lie way too much.
Also, I factor in a few things whenever I think about lying. First, I don’t like lying because I feel I will be directly morally responsible if someone takes the false information I give them and does something bad or stupid. I also feel that it respects a person’s intelligence and dignity to expect that they can handle the truth. Second, every lie I tell has the potential to decrease people’s trust in others if discovered. Mutual trust is essential to a well-functioning society and to relationships, so the danger of damaging this trust must be considered as a potential consequence of any lie. Third, the more reliably honest I am, the more powerful my lies actually become when I do need to lie. After all, if I have a reputation for lying, no one is likely to believe me when I lie. But if I have a reputation for honesty, the few times when I am justified in lying will be that much more effective and convincing.
I used to be a bit paranoid about other people lying to me, but now I recognize that I shouldn’t worry so much. Generally, I believe that if my friends are lying to me, it must be either because there is a secret they want to keep to themselves, in which case I should respect their desire for privacy, or because they have some moral justification or good reason for lying, in which case I should respect their judgment of the situation.
Back when I was more paranoid, I read two books that claimed to be able to teach you how to detect lies, namely, “You Can’t Lie To Me”, and “Spy The Lie”. I wish I could say they were effective and useful, but they actually contradicted each other and I found it exceedingly hard to actually practice what they suggested in casual conversation settings.
Nevertheless, there are a number of psychology studies that purport to have discovered a number of cues that may be suggestive that a person might be lying. They usually suggest looking at body language, especially extremities, and also sudden changes in vocal pitch. Stuff like that.
There’s also this (for what it’s worth): http://www.blifaloo.com/info/lies_eyes.php
In my own experience, it can be quite difficult to keep in mind all the cues that the various lie detection systems/theories suggest and still be focused enough to keep up with a conversation, so I don’t know whether they don’t work or I just wasn’t observant enough, but I’ve generally not had much success as a human lie detector. When I was in an undergrad social psychology class, they actually ran an experiment on the class where we got to guess who was lying and who was telling the truth from previously recorded experiments. It turns out most people are around 50% accurate. Has anyone else had better luck?
I agree that in some cases, including the homophobic parents example, lying can be justified. Even in significantly more mild cases, I can see lying as occasionally the consequentially better course of action, even if you take into account the chance of the lie being found out, trust being lost, and the hurt to other people from being lied to.
However, correct me if I am wrong, but you seem to be arguing something much stronger than this? On my reading, this article promotes at least accepting, maybe even encouraging, using white lies as a way to ease potentially uncomfortable social situations. I’d guess some of the other commenters (particularly Alicorn) have a similar read, and that’s prompting some strong reactions. While white lie culture may be common, and going against the grain (e.g. replying that you’re not particularly keen on some item of clothing when asked by an acquaintance) may go against our social instincts, refusing to say you don’t like things in many situations disallows useful opinion-giving in all similar situations. If I want to get a second opinion on something, I want to ask someone who will give me information. If, no matter their true opinion, they’ll offer some mild nicety/white lie to spare my feelings, I’m not going to learn much. If every time someone asks their friends whether their new haircut suits them the friends must say yes, that person is never going to learn they have a haircut few people like, and maybe more importantly they’re going to start automatically downgrading similar praise, quite correctly, because “people saying my haircut is nice” has zero correlation with the haircut being nice.
I accept that many, maybe even a significant majority of, people do just look for compliments or niceties some of the time. I accept that giving them those compliments rather than honesty may be better for their self-esteem in the short term. However, I have found that so long as I present myself as direct but gentle from the start and don’t hide honesty from someone then spring it on them at a bad moment, a vast majority of even those compliment seekers at least respect gentle honesty and many of them find it refreshing. Perhaps this is in part due to my social group being unusually tolerant, and this strategy would fail elsewhere.
On the other side, I prefer people to be honest with me and attempt to self-modify towards being someone who would, in all but the most convoluted situations, prefer in the long term to be told the truth in response to all serious questions. I do this specifically so I can appear to be a person who it is better to tell the truth to in effectively every case, because I want to be able to reliably get true opinions. This is something I have never had a negative reaction to once explained, and has been the gateway to many interesting conversations.
Due to these working well for me and the large advantages of being able to communicate openly with greatly reduced fear of unintended offence provided by a general near-universal policy of honesty, I remain very skeptical of the idea that the habit of looking for reassurance at the expense of honest advice or opinions is something to be respected or encouraged (especially in rationalist circles where truth-seeking is prized).
Last note: I see saying the truth but bending the meaning to be polite as signaling to someone that you don’t quite mean what you’re saying, subtly enough that if (and only if) they care about your true opinion enough to pay attention to what you say and ask a follow-up question, you’ll tell them the full story. If they were just looking for a generic nicety, they either won’t notice your slightly careful wording, or should not request information they do not want. This is useful for people who may have reason to want your true opinion, and is a way of avoiding getting into the habit of telling white lies. It’s rarely hard to avoid the question or skip over it even if you can’t come up with a convincing not-lie, so long as you don’t get too obviously caught up in internally debating what to say or how to avoid offense first.
If someone asks you how their haircut looks and you think they’re just fishing for a compliment, you don’t have to lie. There’s probably something about the person that’s worth complimenting, and if you compliment them on some other thing they will also be happy.
If you tell them “I think the core of your beauty doesn’t lie in your haircut but in the strength of your character,” few people would complain. Someone who’s specifically fishing for a compliment might even be much more impressed than if you had said “the haircut looks nice.”
You don’t impress people by giving them the default compliments they’re looking for. Of course, to give honest compliments that are deeper than the ones people are fishing for, you have to think deeply about what you appreciate about other people.
As a tactical matter, it’s also useful to consider what they appreciate about themselves.
That may solve the problem that it becomes impossible to pay genuine compliments, but it doesn’t solve the problem that it’s impossible to get honest feedback.
If you ask me about your haircut and I give you a compliment about something unrelated to your haircut, you have two choices. If you are fishing for a compliment, you will accept the compliment. If you are seeking honest feedback, you can ask again: “Please tell me what you really think about my haircut.”
Instead of trying to answer the question at face value, think about why they are asking. With training you can also learn to read people to understand what they want. You will make mistakes, sometimes giving a compliment to someone who seeks honest feedback and sometimes giving honest feedback to someone who’s seeking a compliment, but reading people is a skill that you can learn.
You get bonus points if it’s implicitly obvious to the other person whether you’re treating their question as a request for a compliment or as a request for honest feedback. It signals that you understand them on a deeper level.
In today’s world there’s something special about the person who gets that they are being asked to make a compliment to lift someone’s mood, and then makes an effort to give a really great compliment.
I admit that I’m not the best person at giving compliments, but when I see someone who’s good at it, it’s impressive. The social advantages that a skill like that provides are much bigger than the benefits you get by telling white lies.
Telling white lies is easy. If you don’t have much social skill, it might be your best move in a social situation. If you put the effort into developing skills, however, you can make much better moves.
I’m disinclined to believe without further experience that everybody would be completely blinded by the new compliment and forget about the fact that one’s evading the question kind of implicates something about what one thinks about the haircut in particular… But then I can’t simulate people who fish for compliments with utterances that look like requests for feedback anyway. I find this practise supremely annoying and, most of all, completely alien. I can’t imagine enjoying a compliment that I would elicit in such a way, it would feel totally ridiculous. So maybe you’re simply right about this kind of people.
Curiously, it somehow didn’t occur to me at all that one could, of course, simply ask a second time when one wants honest feedback and is faced with an evasive compliment. Although I suspect in practice there is an incentive for people to just default back to lying because finding a substitute-compliment might not be easy for them, or they might just forget. So while that system would work from the perspective of both compliment-fishers and feedback-desirers, it requires rather costly cooperation on the part of the people being asked.
Not within certain practical constraints. As an introvert who is not constantly submerged in the social world, I strongly suspect that I am never going to get enough data points to learn to read people really well, because the data are so freaking noisy.
For some people, it’s actually psychologically costly, because they have a habit to break when they do so. Paying honest compliments is much easier for me.
It costs mental effort. Over time practicing that effort develops better social awareness. It doesn’t cost you money, status or time that you can’t allocate to other tasks.
The point isn’t to blind them. The point is to give them what they are really asking for. They are not asking for an opinion of their haircut, they are asking you for a compliment.
It’s not wrong for you to treat them as having asked for a compliment. Being explicit about the fact that they asked you for a compliment is bad manners but implicitly acknowledging it isn’t.
It’s completely okay that they know that you know that they didn’t want honest feedback. Especially with a woman who really only wants a compliment, it shows that you get it, in contrast to other men who don’t. It’s much better than when the woman thinks that you don’t understand her.
When it comes to telling whether people are fishing for a compliment or seeking honest feedback, it might seem complicated at first, but it’s not asking for the moon.
To learn it, you could adopt a policy of never giving a person who seems to be asking for a compliment the compliment they are looking for. Then you observe their reactions. If they are delighted with the compliment you gave instead, you were right.
On the other hand if they seem to be annoyed that you evaded their question, you were wrong.
Of course at the beginning you will make mistakes from time to time. Those mistakes allow you to learn. At the moment you don’t try to identify people who are fishing for compliments and that means there’s no learning process with feedback.
In that case, practice telling more of them. When you do, look at the reaction of the other person. If it makes them smile, you win. If it doesn’t, you lose. With practice you will get better at reading people and finding compliments that make them smile.
Mistakes of telling compliments that don’t move the other person very much are cheap. Additionally, if you are known as a person who gives a lot of compliments, the honest feedback that you give will annoy people less, because you have already fulfilled your social duty of showing that you care about other people as far as the compliment department goes.
I think introverts often think too much about “what’s the social custom and how can I follow it?”, or go down the extreme of pickup artistry, but too seldom take the middle way of finding nonstandard behavior that’s completely socially acceptable.
I have to admit that this baffles me, but I’ll take your word for it.
Even if we’re talking about that particular aspect, it’s kind of hard, and I’m not exactly being showered with data. I don’t actually experience that many people asking for compliments; do you? Come to think of it, the whole thing may not be as big an issue. (I know why I have such a strong aversive emotional reaction to it nonetheless.)
I think I’m basically doing exactly that.
At the moment, not that much. Most of the people with whom I have longer social interactions don’t operate at that level. Few masks, but direct talk about psychological needs. Lots of physical contact regardless of the gender of the person I’m interacting with.
On the other hand I know that those interactions are not representative of “normal culture”.
Good. I was not asserting that you aren’t. I don’t know you well enough for that.
I think this is a great point. By verbally giving positive feedback, and nonverbally giving lukewarm feedback, you are not necessarily lying, because your communication is not just your words. If someone wants you to give a comprehensive critique, they can ask for it explicitly. This way, the people who want encouragement can get it, and the people who want critique can get it.
To me, the most intelligent default is that I consider a request for feedback to be a request for encouragement, but people can always override this default by explicitly asking me for a critique.
I agree that that’s a useful default with most people, and a reliable one even with those you don’t know well enough to figure out how they’d react to criticism.
I’d put a bit more emphasis on how putting a white lie into the initial encouragement can cause issues, though. If you’ve said something generally encouraging or picked out some positive, but not actually said anything you think of as untrue, then if they do explicitly ask for a critique you can give them your opinions and suggestions in full. If you used what you hoped would be a white lie, then you must either contradict your previous encouragement or withhold parts of your opinion even if the person genuinely requests it and wants feedback, both of which seem like bad options.
I don’t think people have a right to lie to other people. I also can’t understand why you would regret breaking up with someone so truth-averse and horrible.
Almost everyone is truth-averse to that degree in at least some circumstances and on some occasions. If you are looking for a partner who is never truth-averse in that way, there is a good chance that you will never find one.
(Old post)
For the same reason that a lot of discussions about other kinds of ethics include extreme situations such as trolley problems and killing patients for their organs.
Here is a (somewhat transhumanist) webcomic’s take on the matter.
A very interesting take on rationalizing lying, though I think that you might be over-rationalizing it (if such a thing is possible). It seems to me that such a thing can be summed up in a couple of sentences: if it benefits you and those around you, it is okay. If it doesn’t, then it is not okay. Lying only to benefit yourself is unethical, immoral, and under plenty of circumstances, illegal. Honestly, you can use this as a general principle: if it is unethical, then there is an increased probability it is illegal. Now, this doesn’t apply to announcing one’s homosexuality, or to simply lying about a friend’s looks, but still...
It could be that the wrong lesson is being learned here. If someone were to write a relationship-debugging cheatsheet flowchart, it would almost certainly start with “Was I being a pussy[1]?”. Weakness is the problem here; the honesty is secondary. The pattern described is:
Request for feedback.
Evasiveness.
More requests.
More evasive answers.
Push for clear communication.
Critical comment.
That is one of the worst reply strategies imaginable[2]. It signals fear, lack of confidence, untrustworthiness, incompetence at navigating the flow of conversation and submissiveness. The precise details of the final reply there are not important. The reluctant honesty presented effectively as a ‘confession’ doesn’t work well. Reluctantly getting badgered into lying to say what you think she wants you to hear isn’t exactly optimal either.
If you want to lie in response to a social-feedback review situation then just do it, straight off. If you don’t want to lie then an option is to honestly say that you enjoyed the play and particularly liked <one of the many things that didn’t suck> and have a clear boundary against being pressed. Evasiveness then compliance is just way off.
[1] People uncomfortable with that term can either replace it with a preferred one or do a search for previous discussions here of the etymology.
[2] There are exceptions including but not limited to “get naked and start beating her with a maggot infested Koala liver”.
I don’t know—depends on the context. Imagine a relationship that is strongly based on the Guess culture. The interpretation then would be quite different:
Request for feedback.
Evasiveness (this is a signal: I won’t comment positively, don’t ask)
More requests (either “I didn’t understand your signal” or “I really want your positive comments”)
More evasive answers (another signal: I REALLY won’t say positive things, back off, you’re setting yourself for a fall)
Push for clear communication (either “I’m clueless about your signals” or “I don’t fucking care”)
Critical comment (“Well, you forced the situation to this, if you really insist you can have it”)
Certainly not the best way a conversation can develop, but it’s mostly miscommunication, not lack of confidence or being not trustworthy.
I agree that the implications of a conversation can vary drastically based on the context. If we had a video of the conversation (even without the sound) we would have much more information about the social meaning than just seeing the words.
For whatever it is worth, my evaluation, even from the “guess culture” perspective, would be that there is still some signal of both undesirable traits and likely of an underlying lack of respect in this kind of conversation. In no small part this is because guess-culture initiates are supposed to get to the white lies sooner!
I can’t claim particular expertise at social dynamics; I’m just a curious observer who tries to comprehend what was once incomprehensible as best he can. As best I can establish from what I do know, that particular configuration of social persona, in the “normal” guess culture, has some degree of social weakness of the kind that tends to result in bad outcomes for both parties. It is the kind of thing that reduces respect, and this happens to be an instance where that instinctive reduction in respect is practical and not just the human desire for association with the socially powerful.
That’s the way I read it, BTW.
There are numerous ways you could have said the same thing (including the same connotations) without alienating parts of your audience. You clearly were aware you were going to alienate part of your audience, so why didn’t you use an alternate phrasing?
Because I don’t have an alternative phrasing which does have the same meaning and connotations. The alternatives I did consider required a paragraph of explanation. (And, of course, my model of the people who have a problem with the phrasing expects most of them to find the fundamental claim offensive too, so, quite frankly, they are not valued highly as a target audience for that kind of conversation.)
What’s wrong with wimp? Wuss might work too if the etymology is obscure enough to people.
I didn’t find your comment offensive and pretty much agreed with it, but might care if other people did.
SaidA’s answer is likely better than the explanation I could come up with. Those words cannot stand alone to convey the same meaning. (Tangentially, they are also frankly much more sexist and presumptively gender-normative in practical usage than the term I used.)
There is also the critical desideratum that this kind of heuristic needs to be simple. It can’t be obfuscated behind a sentence of political correctness if it is to be used as the first step in a diagnostic flowchart. There needs to be a single word that has precisely the connotations that ‘pussy’ has. If there were another word that meant the same thing, I would be eager to use it. However, the kind of people most inclined to suppress that term tend to be the same kind of people who don’t want there to be a word for the concept at all, because they find any bare-bones and literal discussion of social reality to be uncouth.
This is the kind of situation where I would be (and in the past have been) reasonably content to submit to the will of the participants and ‘write off’ LessWrong as a place where useful conversation cannot occur, but not willing to distort the discussion to appease social politics. I happen to think it’s an error to learn “My problem is that I don’t lie enough” when the explanation “I was being a pussy” fits perfectly, but it isn’t a battle I am willing to spend social capital to fight.
“Wimp” and “wuss” have connotations of weakness in conflict with other men, in personal or, at best, professional circumstances. “Pussy” has the connotation (among others) of weakness in relationship power dynamics, which your suggestions do not.
If these indeed are the usual distinctions in connotation, thanks for the clarification. Some kind of a connotational dictionary would be nice, but I suppose the contents might change quite rapidly.
A strange idea, but not necessarily a bad one. I am intrigued.
How well does http://www.urbandictionary.com/ fit?
I use it quite often and would recommend it to others, but don’t have the impression that it’s accurate considering how illiterate and random many of the authors seem to be.
Connotation is tricky enough that it’s dangerous to presume any single source is accurate. Submitted definitions of poor average quality aren’t a fatal problem, so long as the people who vote, in aggregate, can distinguish useful information from garbage.
Moreover, connotations often depend on specific subcultures. In some, connotations get inverted (e.g. “punk”).
FWIW, I disagree with this. In my experience, they are synonyms, or the offensive one is a more intense version of the other two. But I don’t see them as applying to different contexts.
This seems broadly correct, but could you say more about having “a clear boundary against being pressed”?
What does that look like? (A bit of sample dialog or somesuch would be particularly appreciated.)
Downvoted for the use of a gendered insult.
I’m a bit confused by an evangelist for lying. I can see why a person would be a defector, but why on earth would you profess it?
Lies are good. We should evangelize good things. Saying you support white lies signals to everyone who might talk to you that they are better able to trust you not to reveal, for example, private details about their lives.
In addition to mistakes other commenters have pointed out, it’s a mistake to think you can neatly divide the world into “defectors” and “non-defectors,” especially when you draw the line in a way that classifies the vast majority of the world as defectors.
Those sorts of mistakes are just gonna happen.
A lot of folks also still believe that words have meanings (as a one-place function), that the purpose of language is to communicate statements of fact, and that dishonesty and betrayal can be avoided by not saying any statements that are “technically” false.
Someone ought to write up “Five Geek Linguistic Fallacies” to accompany this old thing.
“I am afraid we are not rid of God because we still have faith in grammar.” —Nietzsche
Maybe, but it’s not a mistake that I manufactured. The notion that there are honest folks and dishonest folks isn’t unique to me. I didn’t invent it. Nor is it a fringe view. It is, I would posit, the common position.
Further, the idea that the tribe of Honest Except When I Benefit is the vast majority while Always Honest is a tiny minority is not one that I’ll accept without evidence. I think the reverse is true. Many sheep, few wolves.
Here’s one relevant paper: Lying in Everyday Life
I read that paper, and was distressed, so I set about finding other papers to disprove it. Instead I found links to it, and other works that backed it up. I was wrong. Liars are the larger tribe. Thanks for educating me.
Upvoted for publicly changing your mind.
An extended answer to your question is given in the original post—the post is all about answering that question, and it seems very clearly written to me. So I think you’re being silly.
My downvote stays. You really should have made this clear in your post, and you advocate a departure from radical honesty even when that works, instead of discussing strategies for determining whether your interlocutor is ask, guess, or (gasp!) tell. This advocates an overly radical departure towards lies for anyone, and argues for defection on the PD. It’s one thing to say that, given people will lie, you should become skilled at it and learn to detect them; quite another to advocate that it should occur from the start.
I found this to be overly political (especially since you used it as your very first example), and I was tempted to downvote and stop reading right there. I didn’t, and I thought the post was fine, but just thought I should mention it.
“Is it not the case that in our spiritually exhausted age, that seeks to fill the void created by Christianity’s passing with reverence for ‘science’, ‘democracy’, or ‘the people’ - as if such reverence were not simply Christianity by another name! - that even the noble art of lying has been sullied by the hidden demand that all things of value must be within the reach of all men? All the achievements of higher culture—love, art, politics, morality, education, commerce—depend on lying well and lying appropriately, and consciousness itself is little more than the tissue of lies which a life finds expedient to tell. Many a great spirit has suffered distress at such realizations, and in a desperate search for relief, has contrived bold new lies in the name of truth, expert dissimulations whose uncanny plausibility has in fact strengthened the reign of the lie and allowed higher forms of life to continue flourishing. A deep gratitude is owed to these geniuses of self-deception, who discovered for humanity new ways for it to deceive itself and thereby make life bearable…
But our age, which denies all differences of rank and instead posits a fundamental homogeneity of spirit in the guise of equality, can know nothing of these lonely struggles, in which those two forms of will to power, will to truth and will to flourish, collide within the one soul, and at length bear strange new fruit—new perspectives and new principles of evaluation—that will serve to justify an age, a nation, or a civilization to itself. The overwhelming psychological compulsions which produce these great lies are beyond the imagination of priests and shopkeepers; and so, when the tribunes of equality turn their attention to the role of the lie in culture, the only form of lie which they can understand and support is the lie told to protect someone else from a banal truth; a lie usually intended to protect the liar from an equally banal response. Thus the only form of lying that is sanctified, in the periods of the lowest cultural ebb, is the white lie.”—Nietzsche, “Dionysos und der Streich der Tragödie”
I find being generally known to be unwilling to lie highly useful in many situations. Less than a week ago I spontaneously volunteered a compliment to someone who politely thanked me, only to then double-take and remark that she thought that I wouldn’t have said it if I hadn’t meant it. Consequentialists who think that consequentialists should be able to solve the precommitment problem and be effectively honest nonetheless, in real life, cite my deontological prohibition on lying as a good reason to trust me. I am fairly good at omission, and have successfully avoided outing closeted people of my acquaintance who make that preference known to me, though I never felt the need to go through a similar period myself.
Arbitrary people are not obligated to trust me to handle the truth correctly. If for some reason I’m giving the impression that I’m the equivalent of a Nazi at the door or a homophobic parent, I see no reason from their perspective that they should confess to me these secrets even if I ask. This does not mean that we will be friends if I learn that this has been happening. There are plenty of things people might choose to do for reasonable or even unavoidable reasons that mean we will not be friends.
This post makes me less interested in inviting you over for dinner again. What has to happen in your head for you to be willing to come to my house and eat food I cook and participate in charming conversation and then blithely slash our tires if we ask the wrong question because you think we’re going to become hysterical or behave immorally should we gain access to information or be told that we cannot have it? Why does that sound like a welcoming environment you’d like to visit, with us on such a supposed hair trigger about mere true facts? Why should you sound like a guest I’d prefer when you say this? Whatever it is, I don’t like or want it closer to me. You may make that tradeoff, but imploring the people around you to “accept” others’ “right” to lie to them seems like a kind of fucked-up way to attempt to cheat the tradeoff.
There are some communities I consider incredibly welcoming where I don’t imagine by any means that anything I say will be received well just because it’s true. On the other hand, a subculture that not only has idiosyncratic social norms but aggressively shuns anyone who follows mainstream norms, likening violations of their idiosyncratic norms to slashing people’s tires… that sounds incredibly unwelcoming to me.
“Hair trigger about mere true facts” is hyperbole. But the truth is that the overwhelming majority of the human race consists of people who sometimes respond badly to being told “mere true facts.” Insisting you are an exception is quite a brag. It’s possible, but the prior is low. I’d give members of the LessWrong community better odds of being such an exception than I’d grant to most people, but I don’t think every member of the community, or even every prominent member of the community, qualifies. In some cases I think I’ve seen strong evidence to the contrary. (For reasons that should be obvious, please do not ask me to name names.) Because of this, I’m not going to default to treating most members of the LessWrong community radically differently than how I treat non-LessWrongers.
Not really my business, but a reaction like this may give people an incentive to lie to you.
I think that reaction is walking her talk. She could have changed her preference for inviting him over for dinner silently. Being truthful about her position is an example of being radically honest.
That doesn’t, however, make the response incorrect.
It depends on her reputation for being good at detecting when people lie to her.
If she has a reputation for being good at it and openly makes it known that she punishes people for lying to her, people will be less likely to lie to her. She only has a problem if people believe that she can’t effectively punish people for lying to her because she doesn’t spot the lies.
It doesn’t make sense to adopt a policy where a person sharing information about what it is like to interact with them must never affect how likely you are to interact with them. If someone tells me they’ve taken up smoking, they have contracted tuberculosis, they have decided that punching people in the arm is affectionate behavior, etc., then it’s kind of them to warn me and they could achieve short-term gains by deceiving me instead until I inevitably notice, but I will not reward the kindness of the warning with my company. The case of lying recurses here where the other examples don’t, but my goal is not, “make sure that people who have a tendency to lie don’t lie about having that tendency”. It’s “don’t hang out with people who are going to lie to me, like, at all”.
Good luck with that.
I think it’s a mistake to interpret “I will sometimes do (extreme thing)” as “my threshold for doing (extreme thing) is low enough that I’d be likely to do it in everyday situations”.
If I visited your house, ate your food, and then you asked me “I want to kill my son by running him over with my car because he told me he’s gay. What’s the best way to do this without being caught by the police?”, depending on circumstances, I might slash your tires, or do things that cause as much damage to you as slashing your tires.
So if you asked me if I would slash your tires if you told me something bad, I’d have to say “yes”. But it doesn’t mean that if you invited me to your house you would have to watch what you say to me in fear that I might slash your tires, because the kinds of things that would lead me to do that would also imply that you’re seriously messed up. Nobody would just say those things by accident.
I see this fallacy a lot in discussions of rationalist ideas.
It seems like this is an example of my new favorite conversational failure mode: trying to map an abstraction onto the reference class of your personal experience, getting a strange result, and getting upset instead of curious.
ChrisHallquist said there are some circumstances in which he feels compelled to lie. It seems like Alicorn assumed both that this must include some circumstances she’d be likely to subject him to, and that what he thinks of as a lie in that circumstance is something that will fall into the category she objects to. Of course, either of those things or both could be true—but the way to find out is to consider concrete examples (whether real or fictional).
Personally I used to make this mistake a lot when women complained (in vague abstract terms) about being approached by strangers in coffeeshops, and talk about how they’re not obligated to be polite or nice in those cases. Once I got curious and asked questions, and found out that “approached” meant a guy persistently tried to engage her in conversation with no affirmative encouragement from her, and “not polite” didn’t mean “fuck off and die, asshole” but just failing to throw a lot of warmth and smiling into the conversation, it made perfect sense, though I was surprised that it wasn’t already obvious to everyone that no such obligation exists.
I really really like this comment. I really want more clarification now. But from my perspective, someone who has a categorical rule against lying is like learning I’m being graded on everything I say. I suddenly have the massive cognitive burden of making sure everything I say is true and that I mean all the implications or I can suddenly be shunned and outcast.
I’m curious. Is telling the truth really a cognitive burden?
Walking is not a cognitive burden. Walking on a tightrope is. Being able to say whatever I feel like saying without having to analyze it constantly for punishment is the equivalent of simple walking. I may tell the truth in 90-99 percent of the statements I make, but when I get put into a context of punishment, suddenly I have to worry about the consequences of making what would otherwise be a very small step away from the straight and narrow.
Well, I feel like I’m walking on a tightrope much less when I’m allowed to be honest about everything than when I feel like there are things I’d be supposed to lie about.
My confusion increases. If you say whatever you feel like, you sometimes lie?
Yes, of course. Someone asks how I’m doing. I’m having a terrible day but say “fine” because I don’t want to talk about it. Is this example clear enough for you?
As noted elsewhere, that’s not really a lie, because “How are you?” isn’t actually a question, it’s more of a greeting protocol.
That statement only makes the web of lies/things that technically don’t count as lies I have to keep in my head to stay on Alicorn’s good side even more complicated.
I’m not that complicated and I’d rather you didn’t pin the entire intricacy of socialization on me personally. I’m okay with phatics like “fine”, but if you’re actually talking to me, specifically, I’ll also take “enh” or other non-information as a sign not to pursue the conversation as long as I’m reasonably on the ball and you can also tell me “I’d rather not talk about that”.
That’s good to know but I wouldn’t have guessed it from what you said in the post about slashing tires.
You’re aware I did not invent the tire slashing metaphor, right? You seem to be reacting very strongly and specifically to it. I linked a source the first time I used it here.
It seems more like the opposite to me. Telling the truth involves keeping track of what is going on in my head, but lying involves keeping track of what is going on in my head and keeping track of what appears to be going on in my head (and making sure they aren’t identical).
Saying whatever is in my head is easier than making up lies is easier than picking the phrasing of the truth that doesn’t offend or scare people.
Ah, okay. That sounds about right.
This has been my experience as well. Telling the truth requires just saying what’s on your mind, sometimes adjusting to avoid making people mad or to be better understood. Lying requires a lot of effort and is stressful.
This is often true, but often the opposite is true. If telling the truth requires extensive evaluation of actual facts, but lying just requires figuring out the best thing to say, then lying can be less stressful.
As used here, “lying” means “intentional deception”, so if you say something, believing it to be true, but it’s actually false, it’s not lying. The contrast is not saying what’s true vs saying what’s false, but saying what you believe to be true vs saying what you believe to be false.
Depends on cognitive style.
Lying is saying something false while you know better. Not lying doesn’t imply only saying true things or knowing all implications.
The added burden should be minimal, as between friends most people already assume that they are not being lied to, without making it an explicit rule.
I think there’s something missing there.
If someone were to put me in imminent fear for my life, I would feel justified in killing them. Now that you know that, would you be able to spend time with me without a massive cognitive burden of making sure that you don’t put me in imminent fear for my life?
And it’s not even like Chris is saying he’d kill anyone. He didn’t say “shunned and outcast”. He’d just lie to them. You consider being lied to such a horrifying prospect that you would devote massive cognitive resources to making sure it didn’t happen?
You’ve completely misread what I said.
To be fair, the sentence he’s quoting is ungrammatical or at least weirdly phrased (“person is like learning”, I had to read that twice), and that may make it more confusing.
Fairness has nothing to do with whether someone is able to accurately read what someone else means.
When faced with weirdly phrased writing, in most cases the effective thing is to simply ignore the point, or to be open about the fact that you don’t understand what someone means and, if you care about understanding it, ask for clarification.
It’s a figure of speech.
And confusion sometimes takes the unfortunate shape of someone thinking they understood and not realizing that they didn’t—they can’t ask to clarify then, can they? Since I believe that, purely as a matter of cause and effect, avoiding poorly formed sentences leads to this happening less often (even in cases when after the fact we would blame the reader more than the writer) I offered that remark as possibly helpful, that’s all.
Do you really believe that someone doesn’t already know that avoiding poorly formed sentences improves understanding of messages? If you don’t then why do you consider it worth saying?
Not really, but then again I’m not sure why you started arguing with me after I gave drethelin feedback on his poorly formed sentence, which he might not have been aware of. So I endeavored to explain to you as clearly as I could why I did that. What are you trying to do here exactly?
You made a point about fairness, and I argued that you are wrong to speak about fairness.
This happens in the context of a post by ThisSpaceAvailable. ThisSpaceAvailable recently wrote a post largely complaining that he isn’t treated fairly. In that context it’s worth noting that local community standards are not about treating other people fairly but about promoting conversations that have utility.
Fairness is a very real concept in which some people believe. The fact that you use the word when you don’t want to talk about fairness is a mistake on your part worth pointing out.
Some day I really should get around to writing the post I’ve been thinking of for about a year.
Write the bad version now. Don’t worry about the good version until you have a complete bad one.
K. opens gedit
Just don’t lie to yourself.
You know, unfortunately I’m so much worse at not lying to myself than at not lying to others. (Then again, I’ve found a way to put this to a good use: if promising myself I won’t eat junk food from the vending machines doesn’t work, I promise that to my girlfriend instead. See also Beeminder. Yvain’s “fictional deities” approach also sounds interesting.)
Has his post offended you or something? You employ pretty strong language, and “this post makes me less interested in inviting you over for dinner again” is a kinda public way of breaking off a friendship, which (regardless of cause) is somewhat socially humiliating for the person on the receiving end. Is that really necessary? Why not settle such personal details via PM?
I don’t see it as a sort of grey fallacy argument to note that “lying” isn’t much of a binary property (i.e., either you lie, or you don’t). There may be simple enough definitions on the surface level, but when considering our various facets of personality, playing different roles to different people in different social settings, context-sensitivity and so on and so forth, insisting on anything remotely like being able to clearly (or at all) and reliably distinguish between “omitting a truth” and “explicitly lying” versus “telling the truth” loses its tenability. There are just too many confounders; nuances of framing, word choice, blurred lines between honesty and courtesy, the list goes on.
Yes, there are cases in which you can clearly think to yourself that “saying this or that would be a lie”, but I see those as fringe cases. Consider your in-laws asking you whether the soup is too salty. Or advertising. Or your boss asking you how you like your new office. Or telling a child about some natural phenomenon. The whole concept of Wittgenstein’s ladder (“lies to children”) would be simplistically denounced as “lying” in an absolute framework.
“Hair trigger about mere true facts” disregards all these shades of “lies” (disparities between internal beliefs and stated beliefs); there are few statements outside of stating mathematical facts for which a total, congruent correspondence between “what I actually believe” and “what I state to believe” can be asserted. Simply because it’s actually extremely hard to express a belief accurately.
Consider you were asked in a public setting whether you’ve ever fantasized about killing someone. Asked in an insistent manner. Dodge this!
Why is this a problem? I’m not Alicorn, but I wouldn’t have any issues admitting in public that yes, I’ve fantasized about killing someone. And the situation is very easy to steer towards the absurd/ridiculous if the asker starts to demand grisly details :-)
Well, “asked in an insistent manner” does seem to count as evidence that there’s some ulterior reasoning behind the question. Ordinarily I expect a lot of people (though maybe not most people) would be happy to admit that they’ve e.g. fantasized about running over Justin Bieber or whoever their least favorite pop star is with a tank, but I for one would be a lot more inclined to dodge the question or lie outright if my conversational partner seemed a little too interested in the answer.
If the conversational partner seems too interested, I’m likely to start inquiring about his/her fantasies… :-D
Heh. Dunno. Many of these other people (vaguely waves towards society) like to insist they wouldn’t. Not even while they’re in the bathroom, you know, producing rainbows. Makes it a good example.
If I’m interpreting your euphemism correctly: this fetish is not as common as you think it is.
The easiest way is to go meta. Ask the other person why they asked the question. If a person asks a question that’s inappropriate to ask in public, you can put the burden of coming up with a good answer on them.
It’s generally high-status behavior not to directly answer a question about whether you engaged in bad activity X, but instead to punish the person asking for suggesting that you might be a person who engages in bad activity X, making them justify their bad faith in you.
It upset me. I don’t like to see lying defended. I would react about the same way to an equally cogent “Defense of Pickpocketing” or “Defense of Throwing Paint On People”, though I imagine those would be much more difficult to construct.
I think there should be negative social consequences to announcing one’s willingness to lie and that there should be significant backlash to issuing a public request that people put up with it.
I think you’re exaggerating the difficulty both of identifying lies and of omitting/deflecting.
“I think about killing my characters off pretty regularly, though often I come up with more creative things to do instead. As far as I know I’m an average amount of susceptible to intrusive thoughts, if that’s what you’re asking, but why are you asking?”
Or if I don’t even trust them with that answer I can just stare at them in silence.
Thanks for telling the truth. But downvoted for “I dislike this position, don’t want to hear it defended, and will punish those who defend it.” This is a much stronger rationalist anathema than white lies to me.
If you want to share arguments for socially unacceptable ideas you can wrap them into an abstract layer.
When you, however, call for people to change their actions in a way that causes harm, I see no reason why that shouldn’t be punished socially.
This is a forum for discussing ideas, it’s not a forum for playing social games. (I’m saying this as someone who is extremely reluctant about white lies and who hates the idea that they are socially expected to lie. Asking a question when one doesn’t want an honest answer is just silly.)
Except when you’re looking for the social / mental equivalent of a shibboleth.
Okay. Asking a question and then being offended and/or hurt when one gets an honest answer is just silly (alternatively, evil).
Except when acting offended and/or hurt signals solidarity and prompts your allies to attack the alien who got the shibboleth wrong. (You can argue that that’s evil, of course, but then you’re trying to break away from some very, very deeply ingrained instincts for coalition politics.)
I think that’s covered by “alternatively, evil”. ;) More seriously, though: how is “knowing what the preferred answer is and either agreeing with it or being willing to lie” a reasonable criterion by which to filter your group?
It proves that you value loyalty to your group more than you value your own capacity to reason, which means that authoritarian leaders don’t have to consider you a threat (and thus destroy you and everything you hold dear) if they order you to do something against your self-interest. Thus, perversely, when you’re in an environment where power has already concentrated, it can be in your self-interest to signal that you’re willing to disregard your self-interest, even to the point of disregarding your capacity to determine your self-interest.
Once ingrained, this pattern can continue even if those authoritarian leaders lose their capacity to destroy you—and perversely, the pattern itself can remain as the sole threat capable of destroying you if you dissent.
(Put a few layers of genteel classism over the authoritarian leadership, and it doesn’t even have to look autocratic in the first place.)
Definitely covered by “alternatively, evil”. Especially when considering a two-person relationship!
My problem with calling these behaviors “evil” is that they don’t have to be consciously decided upon—they’re just ways that happened to keep our ancestors alive in brutal political environments. Cognitive biases and natural political tendencies may be tragic, but calling them “evil” implies a level of culpability that I think isn’t really warranted.
The choice of words was a bit tongue-in-cheek, but enforcing your power over others in this way is definitely not a nice thing to do. And holding people responsible for such disingenuous behaviour only when they consciously deliberate and decide on it doesn’t seem to be very useful to me. People rarely consciously deliberate and decide upon being assholes. (And if someone does what you described in a two-person relationship, I am very inclined to call them an asshole, at least in my head.)
I wonder if people who have a disadvantaged native social circuitry are more likely to judge other people because their success in social situations requires more conscious deliberation and thus they’re expecting more of it from others.
I don’t know; I’m something of a counterexample to that, and I tend to not associate with other socially disadvantaged people, so I don’t have a good reference class to build examples from.
If you just want to discuss ideas, keep out words like “I”.
Don’t say: “But I will implore you to do one thing: accept other people’s right to lie to you.”
Say: “Here are reasons why you might profit from accepting other people’s right to lie to you.”
Maybe even: “Here are reasons why a person might profit from accepting other people’s right to lie to them.”
You have a point there.
Who gets to decide what’s a social game? Attacking people when they’re perceived to be playing social games seems like a social game to me. It’s the nature of many social games that they employ plausible deniability, which leads to a lot of false positives and hostility if you attack all of the potential threats.
What if it doesn’t really cause that much harm? What if it does more good than harm? Then this sort of punishing behaviour entraps us in our mistake.
I think it’s worth distinguishing between punishing discourse in general and personal social consequences. Chris, the OP, has literally been physically in my house before and now I have learned that he endorses a personal social habit that I find repellent. I’m not trying to drive him out of Less Wrong because I don’t like his ideas—I didn’t even downvote the OP! - but it seems weird that you feel entitled to pass judgment on the criteria I have for who is welcome to be in my house.
Edit: separated these two quotes. LessWrong comment formatting stuck them together.
I don’t care whether you let him in your house. You’ve publicly shamed him, and you are saying that this kind of status-attack is the just response to a particular argument, regardless of how it’s presented. You also seem to be vilifying me and dodging my complaint by portraying my judgement as against your home-invitation policy, rather than against your public-backlash policy, which I resent as well.
“Vilifying you”? Because I didn’t understand the thrust of your criticism because you didn’t understand the point of my post? I’m tapping out, this is excessive escalation.
Sorry, that was uncharitable. Tapping out is a good idea.
(In the role of a hypothetical interlocutor)
“See this here?” (Pulls out his Asperger’s Club Card) “I have trouble distinguishing what’s socially acceptable to ask from what isn’t, and since you’re such a welcoming host, I hope you also welcome my honest curiosity. I wouldn’t want to lie—or suppress the truth—about which topic interests me right this moment.
As for the reason for my interest, you see, I’m checking whether your deontological barrier against lying can withstand the social inconvenience of (ironically) telling the truth about a phenomenon (fantasizing about killing someone) which is wildly common, but just as wildly lied about.
Your question answered, allow me to make sure I understood you correctly: My question was referring to actual people. Have I inferred correctly that you did in fact fantasize about killing living people (non-fictional) on multiple occasions?”
ETA:
I see. Unfortunately, unlike “pleading the fifth”, not answering when one answer is compromising is kinda giving the answer away. The symmetrical answering policy you’d have to employ in which you stare in silence regardless of whether the answer would be “yes” or “no” is somewhat hard to sell (especially knowing that silence in such a case is typically interpreted as an answer*). Unless you like to stare in silence, like, a lot. And are known to do so.
* “Do you love me?”—silence, also cf. Paul Watzlawick’s “You cannot not communicate.”
You or your character or both have confused “not lying” with “answering all questions put to one”. And for that matter “inviting people who ask rude questions indiscriminately to parties in the first place”.
I’d hoped I addressed this in the edit, “cannot not communicate” and such.
You may find yourself in situations (not at your parties, of course) in which you can’t sidestep a question, or in which attempts to sidestep a question (ETA: or doing the silent stare) will correctly be assumed to answer the original question by the astute observer (“Do you believe our relationship has a future?”—“Oh look, the weather!”).
Given your apparently strong taboo against lying, I was wondering how you’d deal with such a situation (other than fighting the hypothetical by saying “I won’t be in such a situation”).
Sorry, I didn’t see your edit before.
Questions I really can’t sidestep are usually ones from people who, for reasons, I have chosen to allow to become deeply entangled in my life. If one of my boyfriends or my fiancé decides to ask me if our relationship has a future I will tell him in considerable and thoughtful detail where I’m at on that topic, and because I choose to date reasonable human beings, this will not be an intolerable disaster. Occasionally if I’m really wedged (at a family holiday gathering, parent asks me something intrusive, won’t back off if I say it’s none of their business) I can solve the problem by deliberately picking a fight, which is usually sufficient distraction until I am not in their physical presence and can react by selectively ignoring lines in emails, but I don’t like doing that.
I don’t stare at people in silence a lot, but I do often give the visual appearance of wandering attention, and often fail to do audio processing such that I do not understand what people have said. Simply not completing the steps of refocusing my overt attention and asking people to repeat themselves can often serve the purpose when it’s not someone I have chosen to allow to become deeply entangled in my life; if we’re the only people in the room it works less well, but if I know a person well I’ll only be in a room alone with them if I trust them yea far, and if I don’t know them well and they start asking me weird questions I will stare at them incredulously even if the answer is in fact completely innocuous (“Have you ever committed grand theft auto?”; “are you a reptilian humanoid?”).
I think of such tactics as Aes Sedai mode :-)
I knew you were a deontologist (I am a consequentialist), but I had sort of assumed implicitly that our moralities would line up pretty well in non-extreme situations. I realized after reading this how thoroughly alien your morality is to me. You would respond with outrage and hurt if you discovered that someone had written a defense of throwing paint on people? Or pickpocketing? Although I have never practiced either of those activities and do not plan to ever do so, my reaction is totally different.
Pickpocketing is a perfectly practical technique which, like lockpicking, might be used for unsavory purposes by shortsighted or malicious people, but is probably worth knowing how to do and makes a great party trick. And throwing paint on people? Hilarious. It’s not a terribly nice thing to do, especially if the person is wearing nice clothes or is emotionally fragile, but I think most people who can compose a cogent philosophical essay can also target their prankstering semi-competently.
Pickpocketing-as-theft is to lying-in-general as pickpocketing-as-consensual-performance-art is to, say, storytelling, I suppose I should clarify. I think we legitimately disagree about throwing paint on people unless you are being facetious.
In terms of pickpocketing, I agree that we seem to pretty much agree; I think that pickpocketing for the purposes of stealing what doesn’t belong to you is rarely justified. I was not being facetious about the paint part, though.
A more realistic example would be something like “In Defense of Taxation to Fund the Welfare State”—which would be different from “In Defense of Lying”, because even if I think that taxation to fund the welfare state is immoral, I don’t think that someone who holds the opposite position is likely to hold me at gunpoint and demand that I give money to a beggar; but if someone thinks lying is okay to the degree that the OP does, there is a real risk of them lying to me in personal life. More generally, advocating something bad in the abstract isn’t as bad as advocating something bad that I’m likely to experience personally.
You should try not paying your taxes on the grounds that you don’t want to support the welfare state. If you persist, I’m quite sure at some point men with guns will show up at your doorstep.
Yes, but my friend who is advocating for a welfare state will not be among them. I have nothing to fear from him.
Other than that he probably votes for people who pass laws telling you how much of your money will be taken “for the beggars” and who have no problems sending men with guns to enforce their commands.
He only has one vote out of the many necessary to send men with guns after me. Even if he changed his mind and voted against the welfare state, the probability that anything would change is minuscule. The expected harm from him voting for the welfare state is smaller than that of him sitting next to me after not showering for a couple of days.
But if the pool of voters were much smaller, I’d take a more negative view of his actions.
There’s still cash, right? Might have to change your line of work from bits to bricks too for that to work though.
There is, of course, cash, and the grey economy is not small. But it certainly has its limitations :-/
You lost me there so hard that I am wondering if we’re talking about the same thing—throwing paint at people doesn’t seem to happen in my corner of the world and I’ve never known anyone who got paint thrown at them, so maybe I’m misunderstanding something. So, to be sure, are we talking here about throwing paint, as in the stuff you paint walls with, at people, ruining their clothes, pissing them off, interrupting their day to get washed and changed and all? Is that what you find funny and defensible?
The issue is not so much whether the practice itself is usually done in a defensible manner, but that writing an article playing devil’s advocate to make the case for throwing paint at people isn’t an immoral act.
Then I happen to be asking a separate question that isn’t about “the issue”. The paragraph I am responding to is talking about the practice of throwing paint, not about the practice of writing articles about it.
Nobody here defends the practice of throwing paint.
But if you wanted me to, then I would say that it’s preferable to throwing stones at other people. You still make your political point by throwing paint at a policeman, but you are causing less lasting damage. Convincing those people on the left who have a habit of throwing stones at policemen at political demonstrations to instead throw paint would cause fewer lasting injuries.
You would have even higher returns in utility if you could convince a group like Hamas to throw paint instead of using nail bombs.
No?
Sounds to me like that means “throwing paint is extremely funny and pretty much OK”.
The point of the paragraph is to show that it’s possible to play devil’s advocate in this case. Also a bit about having fun playing devil’s advocate. Joking. Not long ago a fellow member on LW joked about committing bioterrorism. Distinguishing the intent with which something is written is important.
Saying “It’s not a terribly nice thing” labels the action as a hostile action. That means you only do it if you actually want to engage in a hostile action against someone else. Given various choices of hostile actions it’s not clear that throwing paint is a bad choice.
That Vulture’s paragraph could be read that way has occurred to me, but it is far from obvious (you’ll note that my original post here is a request for confirmation that I am reading things correctly). I’ve met people with opinions like that before—not on throwing paint, because again, it’s something I’m unfamiliar with, but on other ways to be a jackass.
But it doesn’t matter. Even if you were correct about that, then if we’re discussing the possibility of Alicorn’s or anyone’s outraged/upset reaction to a defense of throwing paint, this only makes sense if it is a defense that can be taken seriously, one that could elicit a serious reaction. And not something as silly as “you should prefer it to throwing nail bombs”, which deserves only a shrug. So, either way, I felt compelled to assume Vulture was saying something I’m supposed to be able to follow without suspending all common sense.
I do think that Alicorn follows a policy of being offended when people engage in serious efforts to play devil’s advocate for positions that she considers to be immoral.
Playing devil’s advocate for extremely immoral positions is something that some people can see as a game. If you go to the world debating championships, then you might get a topic arguing that there should be more genocide. For debating folks, making such an argument is a fun game of being intellectually detached from the position that one argues. There are other people who don’t think there’s use in someone producing the best defense of genocide that’s possible to produce.
It’s possible to win debating tournaments where judges look at whether the participants make rational arguments while advocating positions that are very immoral. It doesn’t take suspending common sense to make an argument that not enough people throw paint at other people. It just takes intellectual detachment.
I think behaviorally I act almost exactly as you do in terms of trying never to lie but often to evade questions. But for some reason the comment I’m responding to rubs me incredibly negatively. I’m reflecting on why, and I think the difference is that you actually have it easy. You’re trying to live radically honestly in, if I’m not mistaken, the middle of an enclave that has far more of the sort of people who would appreciate LessWrong in your immediate vicinity than most people do. So you can basically choose to be extremely choosy about your friends in this regard.
Try holding everyone around to the same standard you live by when most of your neighbors and colleagues are not associated with the rationalist movement at all, and let’s see how far you get. Let me tell ya, it’s a wee bit harder. For most of us, “be lenient with others and strict with thyself” is a pretty natural default.
I suspect, from Chris’ perspective, if his choices are “be invited to Alicorn’s parties” and “be friends with other people at all,” he may go with the latter.
I believed lying was wrong during times of my life when I didn’t live in a rationalist enclave, too. Curating your friends is easier when you are willing to maintain friendships online. Dinner parties are a luxury I am happy to avail myself of, that’s all.
I grew up in rural Oklahoma, in the “buckle of the Bible Belt”, where anti-intellectualism ran rampant. I was radically honest then (not in the literal sense of “radical honesty”, but in the sense of what Alicorn seems to be advocating), being an atheist, a consequentialist, a transhumanist, and increasingly a libertarian. It didn’t make me very popular—but lying would have been much, much worse. Telling the truth merely made those people dislike me, but lying would have made me compromise my integrity.
“Those who mind don’t matter, and those who matter don’t mind.”
Which totally misses the point of the comment you’re responding to. This isn’t about whether we are radically honest. It’s about whether we insist on everyone we associate with also being radically honest as a condition of our association with them.
That’s a good point. I personally require people I associate with to be honest (except when their lives or livelihoods are at stake), as I hate being lied to. How people respond to this is up to them.
My instant urge when you compared polite lies to slashing your tires is to insult you at length. I don’t think this would be pleasant for anyone involved. Radical Honesty is bad for brains running on human substrate.
I do not and have never endorsed indiscriminate braindumping.
I advocate refraining from taking actions that qualify as “lying”. Lying does not include, among other things: following Gricean conversational maxims, storytelling, sarcasm, mutually-understood simplification, omission, being choosy about conversational topics, and keeping your mouth shut for any reason as an alternative to any utterance.
There is no case where merely refraining from lying would oblige you to insult me at length. I don’t know why everyone is reading me as requiring indiscriminate braindumping.
An emotional response to your statement is not indiscriminate braindumping. I’m not talking about always saying whatever happens to be in my mind at any time. Since I’ve probably already compromised any chance of going to a rationalist dinner party by being in favor of polite lies, I might as well elaborate: I think your policy is insanely idealistic. I think less of you for having it. But I don’t think enough less of you not to want to be around you and I think it’s very likely plenty of people you hang out with lie all the time in the style of the top level post and just don’t talk to you about it. We know that humans are moist robots and react to stimuli. We know the placebo effect exists. We know people can fake confidence and smiles and turn them real. But consequentialist arguments in favor of untruths don’t work on a deontologist. I guess mostly I’m irate at the idea that social circles I want to move in can or should be policed by your absurdity.
I don’t think the above constitutes an indiscriminate braindump but I don’t think it would be good to say to anyone face to face and I don’t actually feel confident it’s good to say online.
This is a summary reasonably close to my opinion.
In particular, outright denouncement of ordinary social norms of the sort used by (and wired into) most flesh people, and endorsement of an alternative system involving much more mental exhaustion for the likes of people like me, feels so much like defecting that I would avoid interacting with any person signalling such opinions.
Incidentally (well after this thread has sort of petered out) I feel the same sort of skepticism or perhaps unenthusiasm about Tell Culture. My summarized thought which applied to both that and this would be, “Yes, neat idea for a science fiction story, but that’s not how humans work.”
Upvoted for the entire comment, but especially this.
And this.
Depending on the context, lies of omission can be as bad as, if not worse than, blatant lies (due to being all the more convincing).
Imagine that I ask you, “did you kill your neighbour?”, and you answer “no”. The next week, it is discovered that you hired a hitman to kill your neighbour for you. Technically, you didn’t lie… except by omission.
Personally, I’d categorize putting a hit on somebody as killing them, but if you really, sincerely didn’t think of the words as meaning that, and I asked you that question, and you told me ‘no’, then I wouldn’t add lying to your list of crimes (but you’d already be behaving pretty badly).
The thing I’m measuring here is not, actually, the distance traveled in the audience towards or away from omniscience. It’s something else.
Something perplexes me about the view you describe, and it’s this:
What is the point?
That is to say: You say lying is bad. You describe a certain, specifically circumscribed, view of what does and does not count as lying. The set of conditions and properties that define lying (which is bad) vs. things that don’t count as lies, in your view, are not obvious to others (as evidenced by this thread and other similar ones), though of course it does seem that you yourself have a clear idea of what counts as what.
So my question is: what is the point of defining this specific set of things as “lying, which is bad”? Or, to put it another way: what’s the unifying principle? What is the rule that generated this distribution? What’s the underlying function?
Ok, that’s fair; so what would be an example of an omission that, in your model, does not count as a lie and is therefore acceptable?
What kind of scope of omission are you looking for here? If someone asks “what are you up to today?” or “what do you think of my painting?” I can pick any random thing that I really did do today or any thing I really do think of their painting and say that. “Wrote a section of a book” rather than a complete list, “I like the color palette on the background” rather than ”...and I hate everything else about it”.
Also, not speaking never counts as lying. (Stopping mid-utterance might, depending on the utterance, again with a caveat for sincere mistake of some kind. No tricks with “mental reservation”.)
Ok, that makes sense. But from my perspective, it still sounds like you’re lying; at least, in the second example.
I don’t see any difference between saying, “I think your painting is great!”, and saying something you honestly expect your interlocutor to interpret in the same way, whereas the literal meaning of the words is quite different. In fact, I’d argue that the second option involves twice the lies.
What, never? Never is a long time, you know. What if your friend asks you, “let me know if any of these paintings suck”, and you say nothing, knowing that all of them pretty much suck?
I would understand it if your policy were something like, “white lies are ok as long as refusing to engage in them would cause more harm in the long run”; but, as far as I can tell, your policy is “white lies are always (plus or minus epsilon) bad”, so I’m not sure how you can reconcile it with the above.
If your friend asks you to serve as a painting-reviewer and you say you will and then you don’t, that’s probably breach of promise. If your friend asks you to do them this service and you stare blankly at them and never do it, you’re probably being kind of a jerk (it’d be nicer to say “I’m not gonna do that” or something) but you are not lying.
I understand your point, but I still do not understand the motivation behind it. Are you following some sort of consequentialist morality, or a deontological one that states “overt lies are bad, lies of omission are fine”, or something else?
As I see it, if a friend asks you “do you like this painting?” and you reply with “the background color is nice”, the top most likely outcomes are:
1. The friend interprets your response as saying, “yes, I like the painting”, as was your intent. In this case, you may not have lied overtly, but you deceived your friend exactly as much.
2. The friend interprets your response as saying, “no, I didn’t like the painting but I’m too polite to say so”. In this case, you haven’t exactly lied, but you communicated the same thing to your friend as you would’ve done with a plain “no”.
3. The friend interprets your response as in (1), with an added “...and also I don’t think you’re smart enough to figure out what I really think”. This is worse than (1).
Similarly, if your friend asks you to review his paintings and you refuse, you’d better have a good reason for refusal (i.e., the truth or some white lie); otherwise, anyone of average intelligence will interpret your response as saying “I hate your paintings but I won’t tell you about it”.
None of what I wrote above matters if you only care about following prescribed rules, as opposed to caring about the effects your actions have on people. Perhaps this is the case? If so, what are the rules, and how did you come by them?
I’m Less Wrong’s token deontologist. I thought most people around here knew that. I wrote this article about it and my personal brand of deontology is detailed in this comment.
Sorry, I did not, in fact, know that; and most people here are consequentialists, so I assumed you were one as well. I’d skimmed your post on deontology that you linked to earlier, but I did not understand that it was meant to represent your actual position (as opposed to merely being educational).
As I said above, if your moral system simply has a rule that states “lying is bad except by omission”, or something similar, then none of my points are valid, so you are right and I was wrong, my apologies.
That said, personally, I don’t think that deontology makes any sense except possibly as a set of heuristics for some other moral system. That’s a different line of debate however, and I won’t push it on you (unless you are actually interested in pursuing it).
I’m willing to answer questions about it if you’re curious, but since I dropped out of grad school I haven’t devoted much time to refining either my ethical theory or my ability to explain it so the old article will probably be just about as good. I get enough debating in just from hanging out with consequentialists all the time :P
To expand on what blacktrance said:
As I understand it, deontological systems are, at the core, based on lists of immutable rules.
Where do the rules come from ? For example, one rule that comes up pretty often is something like, “people have inalienable rights, especially the right to A, B and C”. How do you know that people have rights; what makes those rights inalienable; and what makes you so sure that A, B and C are on the list, whereas X, Y and Z are not ?
I think that rights drop naturally out of personhood. Being a person is to be the kind of thing that has rights (and the obligation to respect same). The rights are slightly alienable via forfeiture or waiver, though.
I don’t quite understand what you mean. Even if we can agree on what “personhood” means (and I’ve argued extensively with people on the topic, so it’s possible that we won’t agree), what does it mean for a right to “drop out naturally” out of personhood? I don’t understand this process at all, nor do I understand the epistemology—how do you determine exactly which rights “drop out naturally”, and which ones do not?
To use a trivial example, most deontologists would probably agree that something like “the right to not be arbitrarily killed by another person” should be on the list of rights that each person has. Most deontologists would probably also agree that something like “the right to possess three violet-blue glass marbles, each exactly 1cm in diameter” should not be on the list. But why ?
I think Alicorn’s answer concerned the ontological status of rights, not the epistemology thereof.
Understood, but I would like to understand both...
Likewise. For what it’s worth, though, I don’t actually think there is a good answer to the epistemological questions you asked; that’s one of the reasons I favor consequentialism rather than deontology. Of course, I imagine Alicorn’s views on the matter differ, so I, too, would like to see her answer (or that of any other deontologist who cares to respond).
As I mentioned here, consequentialism has the same epistemological problem.
In another branch of this thread I’ve just walked through an assessment of whether a provided example contained a rights violation. Does that help?
And consequentialist systems are, at the core, based on an immutable utility function.
Where does this function come from?
Well, no. Utilitarian systems are based on a utility function (although I’m not aware of any requirement that it be immutable… actually, what do you mean by “immutable”, exactly?). Consequentialist systems don’t have to be utilitarian.
Even so, the origin of a utility function is not that mysterious. If your preferences adhere to the von Neumann-Morgenstern axioms, then you can construct a utility function (up to positive affine transformation, as I understand it) from your preferences. In general, the idea is that we have some existing values or preferences, and we somehow assign utility values to things (“things”: events? world states? outcomes? something) by deriving them from our existing preferences/values. It’s not a trivial process, by any means, but ultimately the source here is the contents of our own brains.
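For concreteness, here is a rough sketch of the VNM result itself, as I understand it (standard textbook material; the notation is mine, not anything from this thread):

```latex
% von Neumann-Morgenstern representation theorem (sketch):
% if a preference relation \succeq over lotteries satisfies
% completeness, transitivity, continuity, and independence,
% then there is a utility function u such that
L_1 \succeq L_2 \iff \mathbb{E}_{L_1}[u] \ge \mathbb{E}_{L_2}[u]
% and u is unique up to positive affine transformation:
u'(x) = a\,u(x) + b, \qquad a > 0
```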
The problem is that most (all?) people’s preferences don’t adhere to those axioms.
That’s a valid question, and, admittedly, there’s no good answer that I’m aware of. One might say that, ultimately, the function can be derived from some basic principle like “seek pleasure, avoid pain”, but there’s no objective reason why anyone should follow that principle, as opposed to, say, “seek paperclips, avoid non-paperclips”.
I will grant you that both consequentialism and deontology are based on some a priori assumptions; however, I would argue that the fact that consequentialism is based on fewer such assumptions, as well as its flexibility in the face of new evidence, make consequentialism a more efficient moral system—given that we humans are agents who are reasoning under uncertainty using a comparatively limited amount of data.
I would argue that this “fact” is not in fact true, or at least not obvious. It’s not even clear to me what the content of that claim is supposed to be. If you mean that it takes fewer bits to encode a utility function than a collection of maxims, then this will obviously depend on which utility function or set of maxims is used; also, as Eliezer points out here, this is a really, really bad way to compare moral systems.
Huh? If your claim is that consequentialism is more flexible in the face of new evidence than deontology, you’re going to have to provide some justification for it (as well as specifying precisely what you mean). As I see it, both are inflexible in the sense that ideal agents of either type are incapable of changing their utility function or set of maxims in the face of any evidence, and flexible in the sense that they can use evidence to determine how to pursue their maxims or maximize their utility function, and also in the sense that actual humans will in fact update their maxims or utility function in the face of evidence.
Not necessarily. You are correct in saying that any given arbitrary utility function can be a lot more complex than any given arbitrary set of rules; so strictly speaking I was wrong. However, in practice, we are not dealing with arbitrary functions or rules; we are dealing with limited subsets of functions/rules which are capable of sustaining a human society similar to ours in at least some way. Of course, other functions and other rules can exist, but IMO a moral system that effectively commands its followers to e.g. kill themselves ASAP is not very interesting.
Given this restriction, I believe that consequentialist moral systems which satisfy it will require fewer arbitrary assumptions, in part due to the following:
Changing the maxims is exactly the problem. Given that deontological maxims are essentially arbitrary, and given that the space of all possible human behaviors is quite large, it is already pretty difficult to construct a set of maxims that will account for all relevant behaviors that are currently possible. Of course, you could always create a maxim that amounts to saying, “maximize this specific utility function”, but then you’re just reducing deontology to consequentialism.
In addition, though, as humans acquire more knowledge of and more power over their environment, the set of possible behaviors keeps changing (usually, by increasing in size). This presents a problem for the deontologist, who has to invent new maxims just to keep up (and convince others to use the new maxims, which, as you recall, are entirely arbitrary), as well as possibly revise existing maxims (ditto). The consequentialist, on the other hand, can apply his existing utility function to the new behaviors, or plug the new data into it, in order to come up with a reasonable re-evaluation of the morality (or lack thereof) of each behavior.
To use a trivial example, at some point in human history, it became possible to digitally copy musical performances without paying any money to the original authors. The deontologists are debating to this very day whether such actions count as “theft” or not, “theft” being a prohibited behavior under one specific maxim. Unfortunately, this new behavior doesn’t quite fit the parameters of the original maxim (which was invented before information technology became widespread), hence the debates. But if we dispense with the labels, and attempt to evaluate whether digital music copying ultimately causes more harm than good (or vice versa), then we can at least make some progress.
Upvoted for this, and the excellent (if trivial) digital copying example.
I will add that progress in such cases may also sometimes be made by attempting to discern just what are the origins of our moral intuitions about the wrongness of theft, seeing if those intuitions may be decomposed, and whether they may be reconstructed to yield some concepts that are appropriate to the digital realm. (I’ve got an essay where I attempt to do just that for software piracy, which I may post online at some point...)
The general principle here is that since the basis of our consequentialist systems is the contents of our brains, we can refer to the source material for guidance (or attempt to, anyway). With deontology, since it doesn’t reduce to anything, that move is not open to us. (I think. I remain unclear about where the rules in a deontological system come from.)
Utility functions have the same problem. See below for more details.
Huh? This doesn’t resemble the behavior of any consequentialist I have ever encountered. In practice when presented with new possibilities, consequentialists wind up doing logical back flips to avoid having to do things, such as torturing children to cure malaria, that they find deontologically repugnant.
Yes, of course. I have already said that a deontological system with a single rule that says, “maximize utility function F” would be equivalent to consequentialism, and thus they would share the same problems. However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
That sounds like you’re saying, “no one I know is actually a consequentialist, they are all crypto-deontologists in reality”, which may be true but is not relevant.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
What problems would those be? The only problems you mentioned in your previous post are:
and
When I pointed out that consequentialists have the same problems with changing their utility functions, you declared it “true but not relevant”.
This analogy isn’t accurate. I’m not saying looking at consequences/GPS navigation is invalid. You’re the one who’s saying all non-GPS navigation is invalid/look only at consequences.
Wait, what? What Bugmaster described sounds like the behavior of most of the consequentialists I’ve encountered.
Also, I don’t see what the linked situation (i.e. torture vs. malaria) actually has to do with the current issue. The issue Bugmaster raises is that of new behaviors that don’t precisely resemble any existing behaviors. How does the malaria-children-torture case fit that category?
When presented with a new potential behavior, in this case torturing children to cure malaria, that provides an actual consequentialist reason for doing something deontologically repugnant, he winds up doing logical back flips.
The issue is that the consequentialist has a secret set of deontological maxims, and he chose his utility function to avoid being forced to violate them; he thus has problems when it turns out he does have to violate them to maximize the utility function. His first reaction to this is frequently to deny that the repugnant action would in fact maximize his utility function, sometimes even resorting to anti-epistemology in order to do so. If that fails he will change his utility function; do this enough and the utility function starts to resemble a count of the number of maxim violations.
Edit: Of course, the other possibility is that the consequentialist decides that the repugnant action isn’t so repugnant after all and commences torturing children.
First of all, I must ask that you stop equating utilitarianism with consequentialism.
Second of all, torturing children is not a new behavior, in the way Bugmaster was using the phrase. A new behavior is something that wasn’t available before, wasn’t possible, like “copying digital media”. You couldn’t copy digital media in the year 1699 no matter what your moral beliefs were. You could, on the other hand, torture children all you liked.
Where am I doing that? I don’t think the word “utilitarian” was even used in this discussion previously; I tend to avoid using it, since it has several similar but different definitions and thus tends to cause confusion in discussions.
True, but torturing children to cure malaria is. Another example that may make things clearer is wire-heading, which causes problems for a utility function that hasn’t sufficiently specified what it means by “pleasure” just as “copying digital media” can cause problems for maxims that haven’t specified what they mean by “theft”.
My entire point is that you are ascribing things to consequentialism that are true of utilitarianism, but are not true of consequentialism-in-general.
Ok, I was occasionally talking about Von Neumann–Morgenstern consequentialism since that’s what most consequentialists around here are. If you mean something else by “consequentialism”, please define it. We may have a failure to communicate here.
One may be a consequentialist without adhering to the von Neumann-Morgenstern axioms. “Consequentialism” is a fairly general term; all it means is “evaluates normative properties of things[1] on the basis of consequences” (”… rather than other things, such as the properties of the thing itself, that are not related to consequences”).
The SEP article on consequentialism is, as usual, a good intro/summary. To give a flavor of what other kinds of consequentialism one may have, here, to a first approximation, is my take on the list of claims in the “Classic Utilitarianism” section of the article:
Consequentialism: yes.
Actual Consequentialism: no.
Direct Consequentialism: no.
Evaluative Consequentialism: yes, provisionally.
Hedonism: no.
Maximizing Consequentialism: intuition says no, because it seems to exclude the notion of supererogatory acts.
Aggregative Consequentialism: intuition says yes, but this is problematic (Bostrom 2011) [2], so perhaps not.
Total Consequentialism: probably not (though average is wrong too; then again, without the aggregative property, I don’t think this problem even arises).
Universal Consequentialism: intuition says no, but I have a feeling that this is problematic; then again, a “yes” answer to this, while clearly more consistent, fails to capture some very strong moral intuitions.
Equal Consideration: see the universal property; same comment.
Agent-neutrality: seems like obviously yes but this is one I admit I know little about the implications of.
As you can see, I reject quite a few of the claims that one must assent to in order to be a classic utilitarian (and a couple which are required for VNM-compliance), but I remain a consequentialist.
[1] Usually “things” = acts, “properties” = moral rightness.
[2] Infinite Ethics
Should I take that to mean evaluating only on the basis of consequences, or on the basis of consequences and other things?
Only, yes.
Edit: Although one of the interesting conclusions of Bostrom’s aforementioned paper is that bounding aggregative consequentialism with deontology gives better[1] results than just applying consequentialism. (Which I take to cast doubt on the aggregative property, among other things, but it’s something to think about.)
[1] “Better” = “in closer accord with our intuitions”… sort of. More or less.
Ok, in that case most of my criticism of consequentialism still applies, just replace “utility function” with whatever procedure general consequentialists use to compute moral actions.
No, I really don’t think that it does.
Consequentialists get their “whatever procedure” from looking at human moral intuitions and shoring them up with logic: making them more consistent (with each other, and with themselves given edge cases and large numbers and so forth), while hewing as close to the original intuitions as possible.
It’s a naturalistic process. It’s certainly not arbitrarily pulled from nowhere. The fact is that we, humans, have certain moral intuitions. Those intuitions may be “arbitrary” in some abstract sense, but they certainly do exist, as actual, measurable facts about the world (since our brains are part of the world, and our brains are where those intuitions live).
I mean, I’m not saying anything new here. Eliezer had a whole sequence about more or less this topic. Robin Hanson wrote a paper on it (maybe multiple papers, but I recall one off the top of my head).
Now, you could ask: well, why look to our moral intuitions for a source of morality? And the answer is: because they’re all we have. Because they are what we use (the only thing we could use) to judge anything else that we select as the source of morality. Again, this stuff is all in the Sequences.
Really, to me it looks more like they take one moral intuition, extrapolate it way beyond its context, and disregard the rest.
We also have a lot of deontological moral intuitions and even more virtue ethical moral intuitions.
If you mean the meta-ethics sequence, it’s an argument for why we base our morality on intuitions (and even then I don’t think that’s an entirely accurate summary); its argument for pure consequentialism is a lot weaker and relies entirely on the VNM theorem. Since you’ve claimed not to be a VNM consequentialist, I don’t see how that sequence helps you. Also, you do realize there are bookshelves full of philosophers who’ve reached different conclusions?
Would you apply the same logic to claim that our physical intuitions are our only source of physics? Or, to use an even more obvious parallel, that our mathematical intuitions are our only source of mathematics? In a sense these statements are indeed true, but it is certainly misleading to phrase it that way.
Also, if you say moral intuition is our only source of morality, and people’s moral intuitions differ, are they obligated to obey their personal moral intuitions? If so, does that mean it’s moral for me to murder if my intuition says so? If not, whose intuition should we use?
Which moral intuition is that...?
Yes, I studied some of them in college. My assessment of academic philosophers is that most of them are talking nonsense most of the time. There are exceptions, of course. If you want to talk about the positions of any particular philosopher(s), we can do that (although perhaps for that it might be worthwhile to start a new Discussion thread, or something). But just the fact that many philosophers think some particular thing isn’t strong evidence of anything interesting or convincing.
Um, what logic? For physics and mathematics the claim that “our X-ical intuitions are our only source of X” is simply false: for physics we can do experiments and observe the real world, whereas mathematics… well, there’s more than one way to view it, but if you take mathematics to consist merely of formal systems, then those systems have no “source” as such. Insofar as any of those formal systems describe any aspect of reality, we can look at reality and see that.
For morality there just isn’t anything else, beyond our intuitions.
Moral laws don’t exist anywhere outside of human brains, so in one sense this entire line of questioning is meaningless. It’s not like moral laws can actually compel you to do one thing or another, regardless of whether you are a consequentialist or a deontologist or what. Moral laws have force insofar as they are convincing to any humans who have the power to enforce them, whether this be humans deciding to follow a moral law in their own lives, or deciding to impose a moral law on others, etc.
If people’s moral intuitions differ then I guess those people will have to find some way to resolve that difference. (Or maybe not? In some cases they can simply agree to go their separate ways. But I suppose you’d say, and I’d agree, that those are not the interesting cases, and that we’re discussing those cases where the disagreement on morality causes conflict.)
I mean, I can tell you what tends to happen in practice when people disagree on morality. I can tell you what I in particular will do in any given case. But asking what people should do in cases of moral disagreement is just passing the buck.
I hope you’re not suggesting that deontology, or any other system, has some resolution to all of this? It doesn’t seem like you are, though; I get the sense that you are merely objecting to the suggestion that consequentialism has the answers, where deontology does not. If so, then I grant that it does not. However, these are not the questions on which basis I judge deontology to be inferior.
Rather, my point was that even if we grant that there are, or should be, absolute, unbreakable moral laws that judge actions, regardless of consequences (i.e. accept the basic premise of deontology), it’s entirely unclear what those laws should be, or where they come from, or how we should figure out what they are, or why these laws and not some others, etc. Consequentialism doesn’t have this problem. Furthermore, because moral intuitions are the only means by which we can judge moral systems, the question of whether a moral system satisfies our moral intuitions is relevant to whether we accept it. Deontology, imo, fails in this regard to a much greater degree than does consequentialism.
Because our physical intuitions tell us that it should work.
Then why are we focusing on those particular formal systems? Also where do our ideas about how formal systems should work come from?
Well, look at the game-theory-based decision theories; notice that they seem to be converging on something resembling Kantian deontology. Also, why do you hope that? Don’t you want the issue resolved?
I’m not really sure what you mean by this.
Why indeed? Mathematics does sometimes examine formal systems that have no direct tie to anything in the physical world, because they are mathematically interesting. Sometimes those systems turn out to be real-world-useful.
What do you mean, “how formal systems should work”? Formal systems are defined in a certain way. Therefore, that is how they work. Why do we care? Well, because that’s an approach that allows us to discover/invent new math, and apply that math to solve problems.
Really? Kantian deontology, and definitely not rule consequentialism?
I meant, by that, that such a claim would be clearly false. If you were claiming clearly false things then that would make this conversation less interesting. ;)
Where does your belief that observing the world will lead us to true beliefs come from?
First, where do those definitions come from? Second, as Lewis Carroll showed, a definition of a formal system is not the same as a formal system, since definitions of a formal system don’t have the power to force you to draw conclusions from premises.
Yes, you may want to look into decision theories, many of which take superrationality as their starting point. Or do you mean taking the Categorical Imperative as a rule-consequentialist rule?
Careful, just because you can’t think of a way to resolve a philosophical problem doesn’t mean there is no way to resolve it.
http://yudkowsky.net/rational/the-simple-truth
… and many posts in the Sequences. (The posts/essays themselves aren’t an answer to “where does this belief come from”, but their content is.)
We made ’em up.
http://lesswrong.com/lw/rs/created_already_in_motion/
I am passingly familiar with these systems. I don’t know why you would claim that they have anything to do with deontology, since the entire motivation for accepting superrationality is “it leads to better consequences”. If you follow unbreakable rules because doing so leads to better outcomes, then you are a consequentialist.
Um, ok, fair enough, so in that case how about we stop dancing around the issue, and I will just ask straight out:
Do you believe that deontology has a resolution to the aforementioned issues? Or no?
That article ultimately comes down to relying on our (evolved) intuition, which is exactly my point.
Once you self-modify to always follow those rules, you are no longer a consequentialist.
Quite possibly.
Upvoted for spotting something probably non-obvious: the parallel between Kantian ethics and certain decision theories seems quite interesting and never occurred to me. It’s probably worth exploring how deep it runs; perhaps the idea that being a rational agent in itself compels you inescapably to follow rules of a certain form might have some sort of reflection in these decision theories.
I certainly would hope that there doesn’t turn out to be a universal cosmic moral law derivable from nothing but logic, if it happens to be a law I really hate like “you must kill kittens”. :)
Also:
This is true. Personally, I think that to the extent that those intuitions ought to be satisfied, they are compatible with consequentialism. This isn’t 100% true, but it’s fairly close, it seems to me.
Except you defined consequentialism as only caring about consequences.
Yes. What contradiction do you see...?
Those intuitions involve caring about things besides consequences. One way to deal with this is to say that those intuitions shouldn’t be satisfied, but then you are left with the question of on what basis you are making that claim. The other way I’ve seen people deal with it is to expand the definition of “consequences” until the term is so broad as to be meaningless.
I agree that the latter maneuver is a poor way to go. The former does make the resulting morality rather unsatisfactory.
My view —
— is another way of saying that some intuitions that seem deontological or virtue-ethical are in fact consequentialist. Others are not consequentialist, but don’t get in the way of consequentialism, or satisfying them leads to good consequences even if the intuitions themselves are entirely non-consequentialist. The remainder generally shouldn’t be satisfied, a decision that we reach in the same way that we resolve any conflict between our moral intuitions:
Very carefully.
For example, do you think creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same?
Those are two different consequences.
What do you mean? If I dispose of the body well enough I can make the final outcome atom-for-atom identical.
Can you expand on what you mean by “final outcome” here, and why it matters?
For my part, I would say that the difference between the world in which a person lives N years and then dies and all the effects of that person’s actions during those N years are somehow undone, and the world in which they didn’t live at all, is the N years of that person’s life.
What you seem to want to say is that those N years aren’t a consequence worthy of consideration, because after the person’s death they aren’t alive anymore, and all that matters is the state of the world after their death. Did I get that right?
That puzzles me. It seems that by this reasoning, I can just as readily conclude that if the universe will ultimately achieve a maximum-entropy condition, then a consequentialist must conclude that all actions are ultimately equally moral, since the “final outcome” will be identical.
My point is that this is what I meant by expanding the definition of “consequences” here.
That is the usual meaning; at least, I thought it was. Perhaps what we have here is a sound/sound dispute.
I dunno.
At the risk of repeating myself: it seems to me that if action A results in a year of my life followed by the eradication of all traces of my existence, and action B results in two years of my life followed by the eradication of all traces of my existence, then if I consider years of my life an important differential consequence with which to evaluate the morality of actions at all, I should prefer B to A since it creates an extra year of my life, which I value.
The fact that the state of the world after two years is identical in both branches of this example isn’t the only thing that matters to me, or even the thing that matters most to me.
For my own part, I don’t see how that makes “consequences” a meaningless term, and I can’t see why anyone for whom the only consequences that matter are the “final” outcome should be a consequentialist, or care about consequences at all.
Again, I suspect this is a terminological confusion—a confusion over what “consequentialism” actually means caring about.
To you—and me—a “consequence” includes the means, the end, and any inadvertent side-effects. Any result of an action.
To Eugine, and some others, it includes the end, and any inadvertent side-effects; but apparently the path taken to them, the means, is not included. I can see how someone might pick up this definition from context, based on some of the standard examples. I’ve done similar things myself with other words.
(As a side note, I have also seen it assumed to include only the end—the intended result, not any unintended ones. This is likely due to using consequentialism to judge people, which is not the standard usage but common practice in other systems.)
Perhaps not coincidentally, I have only observed the latter two interpretations in people arguing against consequentialism, and/or the idea that “the ends justify the means”. If you’re interested, I think tabooing the terms involved might dissolve some of their objections, and you both may find you now disagree less than you think. But probably still a bit.
As I understand Eugine, he’d say that in my example above there’s no consequentialist grounds for choosing B over A, since in two years the state of the world is identical and being alive an extra year in the interim isn’t a consequence that motivates choosing B over A.
If I’ve understood properly, this isn’t a terminological confusion, it’s a conflict of values. If I understood him correctly, he thinks it’s absurd to choose B over A in my example based on that extra year, regardless of whether we call that year a “consequence” or something else.
That’s why I started out by requesting some clarification of a key term. Given the nature of the answer I got, I decided that further efforts along these lines would likely be counterproductive, so I dropped it.
Right, as a reductio of choosing based on “consequentialist grounds”. His understanding of “consequentialist grounds”.
Sorry, I’m not following.
A reductio argument, as I understand it, adopts the premise to be disproved and shows how that premise leads to a falsehood. What premise is being adopted here, and what contradiction does it lead to?
Um, the premise is that only “consequences” or final outcomes matter, and the falsehood derived is that “creating a person and then killing him is morally equivalent to not creating him in the first place because the consequences are the same”.
But it looks like there may be an inferential distance between us? Regardless, tapping out.
That’s your privilege, of course. Thanks for your time.
My understanding of consequentialism is similar to yours and TheOtherDave’s. In a chain of events, I consider all events in the chain to be a consequence of whatever began the chain, not just the final state.
I can’t, to be honest. Pretty much all the standard examples that I can think of relating to consequentialism fall into one of two categories: first, thought experiments aimed at forcing counterintuitive behavior out of some specific dialect of utilitarianism (example: the Repugnant Conclusion); and second, thought experiments contrasting some noxious means with a desirable end (example: the trolley problem).
Biting the bullet on the latter is a totally acceptable response and is in fact one I endorse; but I can’t see how you can look at e.g. the trolley problem and conclude that people biting that bullet are ignoring the fat man’s life; its loss is precisely what makes the dilemma a dilemma. Unless I totally misunderstand what you mean by “means”.
Now, if you’re arguing for some non-consequential ethic and you need some straw to stuff your opponent with… that’s a different story.
They’re not ignoring his life, they’re counting it as 1 VP (Victory Point) and contrasting it with the larger number of VPs they can get by saving the people on the track. The fact that you kill him directly is something you’re not allowed to consider.
Well, nothing in the definition of consequential ethics requires us to be looking exclusively at expected life years or pleasure or pain. It’s possible to imagine one where you’re summing over feelings of violated boundaries or something, in which case the fact that you’ve killed the guy directly becomes overwhelmingly important and the trolley problem would straightforwardly favor “do not push”. It’s just that most consequential ethics don’t, so it isn’t; in other words this feature emerges from the utility function, not the metaethical scheme.
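To make that concrete, here is a minimal sketch in Python, with made-up payoff numbers and two hypothetical utility functions; nothing here is anyone’s endorsed theory, it just shows the verdict flipping with the utility function while the consequentialist machinery stays fixed:

```python
# Toy contrast: the same consequentialist machinery run with two
# different (hypothetical) utility functions over trolley outcomes.

def u_lives(outcome):
    # Count only deaths.
    return -outcome["deaths"]

def u_boundaries(outcome):
    # Weight a boundary you violate yourself far above a death
    # you merely fail to prevent (weight chosen arbitrarily).
    return -outcome["deaths"] - 100 * outcome["boundaries_violated"]

push = {"deaths": 1, "boundaries_violated": 1}
do_nothing = {"deaths": 5, "boundaries_violated": 0}

for u in (u_lives, u_boundaries):
    best = max((push, do_nothing), key=u)
    print(u.__name__, "->", "push" if best is push else "do not push")
# u_lives -> push
# u_boundaries -> do not push
```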
(As an aside, it seems to me that preference utilitarianism—which I don’t entirely endorse, but which seems to be the least wrong of the common utilitarianisms—would in many cases weight the fat man’s life more heavily than that of a random bystander; many people, given the choice, would rather die by accident than through violence. It wouldn’t likely be enough to change the outcome in the standard 1:5 case, but it would be enough to make us prefer doing nothing in a hypothetical 1:1 case, rather than being indifferent as per total utilitarianism. Which matches my intuition.)
So you’re willing to allow summing over feelings of violated boundaries, but not summing over actual violated boundaries, interesting.
That was one example in a very large space of possibilities; you can differentiate the consequences of actions in any way you please, as long as you’re doing so in a well-behaved way. You don’t even need to be using a sum—average utilitarianism doesn’t.
This does carry a couple of caveats, of course. Some methods give much less pathological results than others, and some are much less well studied.
Summing over actual violated boundaries is also a possible consequentialism, but it does not seem to capture the intuitions of those deontological theories which forbid you to push the fat guy. Suppose the driver of the trolley is a mustache-twirling villain who has tied the other five people to the tracks deliberately to run the trolley over them (thus violating their boundaries). Deontologists would say this makes little difference for your choice in the dilemma: you are still not permitted to throw the fat man on the tracks to save them. This deontological rule cannot be mimicked with a consequentialism that assigns high negative value to boundary-violations regardless of agent. It can, perhaps, (I am not entirely sure) be mimicked with a consequentialism that assigns high negative value to the subjective feeling of violating a boundary yourself.
Well, most of the well-known consequentialist dilemmas rely on forbidding considering the path; in fact, not caring about it is one of the premises of the VNM theorem.
As I said, “I can see how someone might pick up this definition from context, based on some of the standard examples.”
I don’t think it’s the intention of those examples, however—at least, not the ones that I’m thinking of. Could you describe the ones you have in mind, so we can compare interpretations?
I … think this is a misinterpretation, but I’m most definitely not a domain expert, so could you elaborate?
Well, caring about the path renders the independence axiom meaningless.
Really? Again, I’m not an expert, but …
How does saying that something positive-utility remains good independent of other factors, and something negative-utility remains bad, preclude caring about those other factors too? If it did, why would that only include “the path”, and not other things we care about, because other subsets of reality are good or bad independent of them too?
Don’t get me wrong; I understand that in various deontological and virtue ethics systems we wouldn’t care about the “end” at all if it were reached through incorrect “means”. Consequentialists reject this*; but by comparing the end and the means, not ignoring the means altogether! At least, in my limited experience, anyway.
Again, could you please describe some of the thought experiments you were thinking of?
*(although they don’t all care for independence as an axiom, because it doesn’t apply to instrumental goals, only terminal ones)
To take an extreme example, in the classic cannibal lifeboat scenario, the moral solution is generally considered to draw straws. That is, this is considered preferable to just eating Bill, or Tom for that matter, even though according to the independence axiom there should be a particular person among the participants sacrificing whom would maximize utility.
I don’t think that’s a consequentialist thought experiment, though? Could you give examples of how it’s illustrated in trolley problems, ticking time bomb scenarios, even forced-organ-donation-style “for the greater good” arguments? If it’s not too much trouble—I realize you’re probably not anticipating huge amounts of expected value here.
(I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.)
What do you mean by “consequentialist thought experiment”?
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
One of the standard thought experiments used to demonstrate and/or explain consequentialism. I’m really just trying to see what your model of consequentialism is based on.
Well, we’re adaptation-executors, not fitness-maximizers—the environment has changed. But yeah, there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
You’re absolutely right about that. In fact, there’s a danger that it can be a fully general counterargument against any moral theory at all! After all, they might simply be rationalizing away the flaws...
I wouldn’t endorse using it as a counterargument at all, honestly. If you can point out actual rationalizations, that’s one thing, but merely calling someone a sophisticated arguer is absolutely a Bad Idea.
Well, as Eliezer explained here, simple moral systems are in fact likely to be wrong.
I think that’s one of the areas where Eliezer got it completely wrong. Value isn’t that complex, and it’s a mistake to take people’s apparent values at face value as he seems to.
Our values are psychological drives from a time in our evolutionary history before we could possibly be consequentialist enough to translate a simple underlying value into all the actions required to satisfy it. Which means that evolution had to bake in the “break this down into subgoals” operation, leaving us with the subgoals as our actual values. Lots of different things are useful for reproduction, so we value lots of different things. I would not have found that wiki article convincing either back when I believed as you believe, but have you read “Thou Art Godshatter”?
People have drives to value different things, but a drive to value is not the same thing as a value. For example, people have an in-group bias (tribalism), but that doesn’t mean that it’s an actual value.
If values are not drives (Note I am saying values are drives, not “drives are values”, “drives to value are values”, or anything else besides “values are drives”), what functional role do they play in the brain? What selection pressure built them into us? Or are they spandrels? If this role is not “things that motivate us to choose one action over another,” why are they motivating you to choose one action over another? If that is their role, you are using a weird definition of “drive”, so define “Fhqwhgads” as “things that motivate us to choose one action over another”, and substitute that in place of “value” in my last argument.
If values are drives, but not all drives are values, then… (a) if a value is a drive you reflectively endorse and a drive you reflectively endorse is a value, then why would we evolve to reflectively endorse only one of our evolved values? (b) otherwise, why would either you or I care about what our “values” are?
I agree that values are drives, but not all drives are values. I dispute that we would reflectively endorse more than one of our evolved drives as values. Most people aren’t in a reflective equilibrium, so they appear to have multiple terminal values—but that is only because they aren’t in a reflective equilibrium.
What manner of reflection process is it that eliminates terminal values until you only have one left? Not the one that I use (At least, not anymore, since I have reflected on my reflection process). A linear combination (or even a nonlinear combination) of terminal values can fit in exactly the same spot that a single value could in a utility function. You could even give that combination a name, like “goodness”, and call it a single value (though it would be a complex one). So there is nothing inconsistent about having several separate values.
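A minimal sketch of that point, with hypothetical values and weights (the names and numbers are illustrative assumptions, not anyone’s actual values):

```python
# A "single" utility function that is just a weighted combination of
# several terminal values. All names and weights are hypothetical.

def happiness(world):
    return world.get("happiness", 0.0)

def fairness(world):
    return world.get("fairness", 0.0)

def goodness(world):
    # The linear combination occupies exactly the slot a single value
    # would: it maps a world to one number, so there is nothing
    # inconsistent about the several terms inside it.
    return 0.7 * happiness(world) + 0.3 * fairness(world)

print(goodness({"happiness": 10.0, "fairness": 2.0}))  # 7.6
```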
Let me hazard a guess, based on my own previous reflection process, now abandoned due to meta-reflection. First, I would find a pair of thought experiments where I had strong feelings for an object-level choice in each, and I felt I was being inconsistent between them. Of course, object-level choices in two different scenarios can’t be inconsistent. There is a computation that returns both of those answers, namely, whatever was going on in your pre-reflection brain.
For example, “throw the lever, redirect the trolley to kill 1 instead of 5” and “don’t butcher the healthy patient and steal their organs to save five.”
The inconsistency is in the two principles I would have automatically come up with to explain two different object-level choices. Or, if my reasons for one emotional reaction are too complicated for me to realize, then it’s between one principle and the emotional reaction. Of course, the force behind the principle comes from the emotional reaction to the thought experiment which motivated it.
Then, I would let the two emotions clash against each other, letting my mind flip between the two scenarios back and forth until one started to weaken. The winner would become stronger, because it survived a clash. And so did the principle my mind coughed up to explain it.
What are the problems with this?
1. It favors simple principles for the sole reason that they are easier to guess by my conscious mind, which of course doesn’t really have access to the underlying reasons. It just thinks it does. This means it depends on my ignorance of other more complicated principles. This part can be destroyed by the truth.
2. The strength of the emotion for the object-level choice is often lent to the principle by something besides what you think it is. Yvain covered this in an essay that you, being a hedonistic utilitarian, would probably like: Wirehead Gods on Lotus Thrones. His example is that being inactive and incredibly happy without interruption forever sounds good to him if he thinks of Buddhists sitting on lotuses and being happy, but bad if he thinks of junkies sticking needles in their arms and being happy. With this kind of reflection, you consciously think something like: “Of course, sitting on the lotus isn’t inherently valuable, and needles in arms aren’t inherently disvaluable either,” but unconsciously, your emotional reaction to that is what’s determining which explicit principles like “wireheading is good” or “wireheading is bad” you consciously endorse.
3. All of your standard biases are at play in generating the emotional reactions in the first place. Scope insensitivity, status quo bias, commitment bias, etc.
4. This reflection process can go down different paths depending on the order that thought experiments are encountered. If you get the “throw switch, redirect trolley” one first, and then are told you are a consequentialist, and that there are other people who don’t throw the switch because then they are personally killing someone, and you think about their thought process and reject it as a bad principle, and then you see the “push the fat man off the bridge” one, and you think “wow, this really feels like I shouldn’t push him off the bridge, but [I have this principle established where I act to save the most lives, not to keep my hands clean]”, and slowly your instinct (as mine did) starts to become “push the fat man off the bridge.” And then you hear the transplant version, and you become a little more consequentialist. And so on. It would be completely different if most people heard the transplant one first (or an even more deontology-skewed thought experiment). I am glad, of course, that I have gone down this path as far as I have. Being a consequentialist has good consequences, and I like that! But my past self might not have agreed, and likewise I probably won’t agree with most possible changes to my values. Each version of me judges differences between the versions under its own standards.
5. There’s the so-called sacred vs. secular value divide (I actually think it’s more of a hierarchy, with several layers of increasing sacredness, each of which feels like it should lexically override the last), where pitting a secular value against a sacred value makes the secular value weaker and the sacred one stronger. But which values are secular or sacred is largely a function of what your peers value.
And whether a value becomes stronger or weaker through this process depends largely on which pairs of thought experiments you happen to think of. Is a particular value, say “artistic expression”, being compared to the value of life, and therefore growing weaker, or is it being compared to the value of not being offended, and therefore growing stronger?
So that you don’t ignore my question like you did the one in the last post, I’ll reiterate it. (And I’ll add some other questions). What process of reflection are you using that you think leads people toward a single value? Does it avoid the problems with my old one that I described? Is this a process of reflection most people would meta-reflectively endorse over alternative ones that don’t shrink them down to one value? (If you are saying that people who have several values are out of reflective equilibrium, then you’d better argue for this point.)
I endorse the process you rejected. I don’t think the problems you describe are inevitable. Given that, if people’s values cause them conflict in object-level choices, they should decide what matters more, until they’re at a reflective equilibrium and have only one value.
But how do you avoid those problems? Also, why should contemplating tradeoffs between how much of each value we can get force us to pick just one? I bet you can imagine tradeoffs between bald people being happy, and people with hair being happy, but that doesn’t mean you should change your value from “happiness” to one of the two. Which way you choose in each situation depends on how many bald people there are, and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn’t answer my last question. Why would most people meta-reflectively endorse this method of reflection?
1, as you said, can be destroyed by the truth (if they’re actually wrong), so it’s part of a learning process. 2 isn’t a problem once you isolate the principle by itself, outside of various emotional factors. 3 is a counterargument against any kind of decision-making; it means that we should be careful, not that we shouldn’t engage in this sort of reflection. 4 is the most significant of these problems, but again it’s just something to be careful about, same as in 3. As for 5, that’s to be solved by realizing that there are no sacred values.
It doesn’t, you’re right. At least, contemplating tradeoffs doesn’t by itself guarantee that people will choose only one value, but it can force people to endorse conclusions that would seem absurd to them—preserving one apparent value at the expense of another. Once confronted, these tensions lead to the reduction to one value.
As for why people would meta-reflectively endorse this method of reflection—simply, because it makes sense.
So what, on your view, is the simple thing that humans actually value?
Pleasure: when humans have enough of it (wireheading), they will like it more than anything else.
(nods) Well, that’s certainly simple.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that there is no such thing as greater pleasure than the loss from having your child murdered, is a subset of “not believing you’ll follow through on your offer”.
Yes, that’s true. If you believe what I’m offering doesn’t exist, it follows that you ought not believe I’ll follow through on that offer.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.”
I’m not clear on where I claimed otherwise, though… can you point me at that claim?
It’s true that you didn’t explicitly claim people wouldn’t do that, but in context, you did implicitly claim it. You were responding to something you disagreed with, which must mean you thought they would not in fact do that, and you were presenting the claim that they would not to support your argument.
https://en.wikipedia.org/wiki/Implicature https://en.wikipedia.org/wiki/Cooperative_principle
I see.
OK. Thanks for clearing that up.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies and I wonder if something like this should go in the list.
Your response seems very strange, because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn’t (in which case you said a complete non-sequitur that by pure coincidence sounded exactly like an argument you might have made for real).
What response were you expecting?
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I’m concerned it was an entirely successful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you’re either willfully deceptive or incapable of communicating clearly, in such a way that it looks willfully deceptive. I was hoping you’d offer another alternative than those.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses “values” to refer to what people like and not what they want). I echoed my understanding of that point, they agreed that I’d understood it correctly, at which point I thanked them and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don’t believe me and that you’ll see this response as more deception. More generally, I suspect I can’t give you what you want in a form you’ll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you’re offering is possible—they believe that the loss of a child would outweigh the pleasure that you’d give them. They think that you’d kill the child and give them something they’d enjoy otherwise, but doesn’t make up for losing a child. Though this may count as not believing that you’ll follow through on your offer. Second, people’s action-guiding preferences and enjoyment-governing preferences aren’t always in agreement. Most people don’t want to be wireheaded, and would reject it even if it were offered for free, but they’d still like it once subjected to it. Most people have an action-guiding preference of not letting their children die, regardless of what their enjoyment-governing preference is. Third, there’s a sort-of Newcomblike expected value decision at work, which is that deriving enjoyment from one’s children requires valuing them in such a way that you’d reject offers of greater pleasure—it’s similar to one-boxing.
Ah, OK. And when you talk about “values”, you mean exclusively the things that control what we like, and not the things that control what we want.
Have I got that right?
That is correct. As I see it, wants aren’t important in themselves, only as far as they’re correlated with and indicate likes.
OK. Thanks for clarifying your position.
How would you test this theory?
Give people pleasure, and see whether they say they like it more than other things they do.
This begs the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
Do you mean “forcibly wirehead people and see if they decide to remove the pleasure feedback”? Also, see this post.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don’t think the connotations of “fully general counterargument” are appropriate here. “Fully general” means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates from other theories, and the counterargument is not “fully general”.
“This theory is too simple” is something that can be argued against almost any theory you disagree with. That’s why it’s fully general.
No, it isn’t: anyone familiar with the linguistic havoc that sociological systems theory deigns to inflict on its victims will assure you of that!
Ok, so what’s an example of something that doesn’t count as a “consequence” by your definition?
Beats me. Why does that matter?
To be more precise: given two possible actions A and B, which lead to two different states of the world Wa and Wb, all attributes of Wa that aren’t attributes of Wb are consequences of A, and all attributes of Wb that aren’t attributes of Wa are consequences of B, and can motivate a choice between A and B.
Some attributes shared by Wa and Wb might be consequences of A or B, and others might not be, but I don’t see why it matters for purposes of choosing between A and B.
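Here is a toy rendering of that definition, assuming for illustration that a world-state can be modeled as a set of attributes (the attribute strings are hypothetical stand-ins):

```python
# Toy rendering: model a world-state as a set of attributes; the
# consequences of choosing A over B are the attributes that differ.

def consequences(wa, wb):
    """Attributes of each state that the other state lacks."""
    return wa - wb, wb - wa

Wa = {"lived year 1", "lived year 2", "identical end state"}
Wb = {"lived year 1", "identical end state"}

of_a, of_b = consequences(Wa, Wb)
print(of_a)  # {'lived year 2'} -- can motivate choosing A over B
print(of_b)  # set() -- shared attributes drop out of the comparison
```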
Ok, now you’re hiding the problem in the word “attribute” and to a certain extent “state of the world”; e.g., judging by your reaction to my previous posts, I assume “state of the world” includes the world’s history, not just its state at a given time. Does it also include counterfactual states, à la counterfactual mugging?
Well, I’d agree that there’s no special time such that only the state of the world at that time and at no other time matters. To talk about all times other than the moment the world ends as “the world’s history” seems a little odd, but not actively wrong, I suppose.
As for counterfactuals… beats me. I’m willing to say that a counterfactual is an attribute of a state of the world, and I’m willing to say that it isn’t, but in either case I can’t see how a counterfactual could be an attribute of one state of the world and not another. So I can’t see why it matters when it comes to motivating a choice between A and B.
So what do you do on counterfactual mugging, or Newcomb’s problem for that matter?
Newcomb-like problems: I estimate my confidence (C1) that I can be the sort of person whom Omega predicts will one-box while in fact two-boxing, and my confidence (C2) that Omega predicting I will one-box gets me more money than Omega predicting I will two-box. If C1 is low and C2 is high (as in the classic formulation), I one-box.
Counterfactual-mugging-like problems: I estimate how much it will reduce Omega’s chances of giving $10K to anyone I care about if I reject the offer. If that’s low enough (as in the classic formulation), I keep my money.
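A toy rendering of those two rules as code; the thresholds and inputs are invented for illustration (the classic formulations just say “low” and “high”):

```python
# Toy sketch of the two decision rules described above.
# Thresholds and inputs are invented for illustration only.

def newcomb(c1: float, c2: float) -> str:
    # c1: confidence I can be predicted to one-box while in fact two-boxing
    # c2: confidence that being predicted to one-box gets me more money
    if c1 < 0.01 and c2 > 0.99:  # the classic formulation
        return "one-box"
    return "two-box"

def counterfactual_mugging(p_reduced_payout_to_others: float) -> str:
    # How much does rejecting the offer reduce Omega's chance of
    # giving $10K to anyone I care about?
    if p_reduced_payout_to_others < 0.001:  # "low enough"
        return "keep my money"
    return "pay up"

print(newcomb(c1=0.001, c2=0.999))   # -> one-box
print(counterfactual_mugging(1e-9))  # -> keep my money
```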
The fact that the fundamental laws of physics are time-reversible makes such variations on the 1984-ish theme of “we can change the past” empirically wrong.
???
One of these cases involves the consequence that someone gets killed. How is that morally neutral?
For the consequentialist to actually start torturing children for this reason, he would have to know, to a high degree of certainty, that the utility function is maximized by torturing children. It may be that, given that he doesn’t have perfect knowledge, he is incapable of knowing that to the required degree. This would mean that he remains a consequentialist but could not be induced to torture children.
Edit: There’s also the possibility that his decision affects how other people make decisions, which is itself a sort of consequence that has to be weighed. If many of the people around him are deontologists, torturing children may have the side effect of making torturing children more acceptable to the deontologists around him, leading to those deontologists torturing children in cases that have bad consequences.
That you can pick hypothetical conditions where your deontological intuition is satisfied by your “utility function” tells us nothing about the situations where the intuition is in direct conflict with your “utility function”.
Let’s make this simple: if you were certain your utility function was maximized by torturing children, would you do it?
As a side note, the topic seems to be utilitarianism, not consequentialism. The terms are not interchangeable.
I am not Omega. I can’t be “certain”.
EDIT: Sorry, turns out you already answered my question. Here are some replacement questions.
You’ve said that you will do nothing, rather than violate a right in order to prevent other rights being violated. Yet you also say that people attempting to violate rights waive their rights not to be stopped. Is this rule designed for the purpose of allowing you to violate people’s rights in order to protect others? That seems unfair to people in situations where there’s no clearly identifiable moustache-twirling villain.
You have also said that people can waive any of their rights—for example, people waive their right not to have sex in order to have sex, and people waive their right not to be murdered in order to commit suicide. Doesn’t this deny the existence of rape within marriage? Isn’t it, in fact, the exact argument that was used to oppose laws prohibiting rape within marriage? This seems worrying. (Obviously, there are other, similar cases that can be constructed, but this one is a major problem.)
Finally, you mention that some actions which do not violate rights are nonetheless “being a dick”, and you will act to prevent and punish these acts in order to discourage them. Doesn’t this imply that there are additional aspects to morality not contained by “rights”? Do you act as a Standard-LessWrong-Consequentialist-Utilitarian™ with regards to Not Being A Dick?
I wish everyone in this thread would be more careful about using the word “right”. If you are trying to violate somebody’s rights, you don’t have “a right not to be stopped”. You have your perfectly normal complement of rights, and some of them are getting in the way of protecting someone else’s rights, so, since you’re the active party, your (contextually relevant) rights are suspended. They remain in effect out of that context (if you are coming at me with a knife I may violently prevent you from being a threat to me; I may not then take your wallet and run off cackling; I may not, ten years later, visit you in prison and inform you that your mother is dead when she is not; etc.).
That’s a good question, but the answer is no. A marriage does not constitute a promise to be permanently sexually available. You could opt to issue standing permission, and I gather this was customary and expected in historical marriages, but you can revoke it at any time; your rights are yours and you may assert them at will. I don’t object to people granting each other standing permission to do things and sticking with it if that’s how they prefer to conduct themselves, but morally speaking the option to refuse remains open.
No. There’s morality, and then there’s all the many things that are not morality. Consequentialists (theoretically, anyway) assign value to everything and add it all up according to the same arithmetic—with whatever epicycles they need not to rob banks and kidnap medical test subjects—but that’s not what I’m doing. Morality limits behavior in certain basic ways. You can be a huge dick and technically not do anything morally wrong. (And people can get back at you all kinds of ways, and not technically do anything morally wrong! It’s not a fun way to live and I don’t really recommend it.)
No. Actually, you could probably call me sort of virtuist with respect to dickishness. I am sort of Standard-LessWrong-Consequentialist-Utilitarian™ with respect to prudence, which is a whole ’nother thing.
Well, sure. I did read your explanation(s). I was assuming the worst-case scenario for the hypothetical, where you have to violate someone’s rights in order to protect others. For example, the classic lying-to-the-nazis-about-the-jews scenario.
Not anymore, no. Because we changed the rules. Because of all the rapes.
I … see, that seems consistent. I assume you can waive the right to abolish an agreement at will, too? That’s the foundation of contract law, but I don’t want to assume.
Indeed, and that’s what I’m asking about. What is this “don’t be a dick” function, and what place does it hold?
Huh. Shame.
… what’s “prudence” in your nomenclature? I haven’t seen the term as you use it.
If you are trying to violate someone’s rights, then your contextually relevant rights are forfeited. For example, the Nazi has forfeited the right not to be lied to.
I’m not sure I understand.
Well, some people are motivated to avoid being dicks, and might value information about how to do it. It’s not very ontologically special.
To me, it looks like consequentialists do prudence exclusively, and name it “morality”, instead of actually doing any morality. Prudence is arranging for things to be how you would like them to be.
Yes, I know. Hence my question:
You see?
Can I waive my right to un-waive a right? For example, if I waive my right not to work for someone, can I also waive my right to change my mind? As in, a standard contract?
I would assume so, but hey, it can’t hurt to ask.
Are you, personally, motivated to discourage dickish behaviour? You are, right? You mentioned you would sue someone for being a dick to you, if it were illegal, even if it was perfectly moral for them to do so.
… as long as “how you would like them to be” doesn’t violate any rights. I think I see what you mean by that.
… ah, and “how you would like things to be” includes “no dickishness”, right?
I do not see; can you start that line of inquiry over completely for me?
I haven’t actually thought about this before, but my instinct is no. Although if you arrange for it to be really hard to communicate a change of mind, someone who went by their last communication with you might not be doing anything wrong, just making a sincere mistake.
I try to minimize it from and around myself. I am not on a Global Dickishness Reduction Campaign of any kind. Maybe I should have said not that I’m not a consequentialist about it, but rather that I’m neither agent- nor recipient-neutral about it? How I would like things to be certainly refers to dickishness.
Sure, I can rephrase.
You’ve said:
You also say that attackers lose their contextually relevant rights, so you can violate their rights in order to defend others.
My original question was, doesn’t that feel like a patch to allow you to act like a consequentialist when it’s clear you have to?
Isn’t that unfair to people in situations where there is no attacker for you to focus on, where relieving their suffering is not a matter of taking out a convenient target?
Also, I just realized this, but doesn’t that mean you should be really concerned about any laws punishing things that don’t violate rights, since those criminals haven’t waived their rights under your system? For example, suing someone for violating your “right to privacy” by publicising a photo taken of you in a public place.
Huh. This universal right to change your mind about commitments seems like the most radical part of your philosophy (although obviously, you’re more tentative about endorsing it) - I noticed you endorsed the right of private individuals to secede from the state.
Yeah, you mentioned being a sort of virtue ethicist here … you would vote for (prudent) anti-dickishness laws, that sort of thing?
No. It doesn’t feel consequentialist to me at all. It’s a patch, but it’s not a consequentialist-flavored patch.
Example?
I am concerned about that, see elsewhere re: guaranteed exit, consent of the governed, etc. etc. blah blah blah.
If you specify that something’s prudent I’m pretty likely to vote for it even if it doesn’t affect dickishness in particular. Yay, prudence!
Yeah, I didn’t understand your position as well when I asked that.
How about tragedies of the commons, for example?
Most countries do not allow criminals to escape punishment by revoking their consent to be punished.
Hmm, good point. Effective anti-dickishness laws, then. (Not that I expect you to change your answer.)
Tragedies of the commons are a coordination problem. My system can cover them if there’s some kind of complex ownership or promise-keeping involved, but doesn’t handle implicit commonses.
Yeah, I’m aware.
I would vote for effective anti-dickishness laws all else being equal but might prioritize reduced dickishness below other things if there were tradeoffs involved.
Well, fair enough, as long as you’re aware of it.
So isn’t there an inconsistency here? Any law punishing something merely dickish, rather than rights violations, is itself a rights violation … right? Or can large-scale prudence outweigh a few minor rights violations?
ETA: That seems reasonable, honestly; even if it does taint the purity of your deontological rules, it only becomes important when determining large-scale policies, which is when you’d want a more precise accounting to shut up and multiply with.
Blah blah guarantee of exit yada yada consent of the governed blah blah I’m really sick now of re-explaining that I am aware of the tensions in my view when it comes to governance and I’m not going to do it anymore.
Alicorn, you just acknowledged that most people being punished are not asked whether they consent to it.
Indeed, attempting to use one’s “guarantee of exit” in these situations is often itself a crime, and one carrying punishments you classify as “rights violations” if I understand you correctly.
That’s sort of why I commented on the potential issues this introduces?
People in the real world do not have guarantee of exit. I’m aware of that. I’ve been over this topic elsewhere in thread more times than I wish to have been.
So … I’m sorry, are you saying you’re actually against these laws, but were rather saying that you would be in favour of them in an ideal world? I appear to have misunderstood you somewhat, so perhaps these repetitions and rephrasings are not in vain.
Thank you for your patience, I know how frustrating it is dealing with inferential gaps and the illusion of clarity better than most :)
It has run out. There is no more patience. It has joined the choir invisible. I said so a couple comments upstream and got downvoted for it; please don’t ask me again.
Well, fair enough. While I’m disappointed not to be able to further improve my understanding of your beliefs, I treasure the LessWrong custom of tapping out of conversations that are no longer productive.
Have a nice day, and may you continue to improve your own understanding of such matters :)
(I think you were actually downvoted for missing the point of my response, by the way. I certainly hope that’s the reason. It would be a great shame if people started downvoting for “tapping out” statements.)
Non-central fallacy.
No, it’s really not.
In fact, it’s precisely the opposite. The central feature of rapes we care about is the fact that they are extremely unpleasant, to put it politely. “Consent”, when formalized so that it no longer captures the information we care about, is noncentral.
Or at least, I think it is. In fact, I believe this should also be clear to Alicorn, on reflection (and indeed she has an explanation for why her system doesn’t fall into this trap.)
Do you disagree?
No, the central issue is in fact consent (there are also other issues related to sex and marriage, but that discussion involves more inferential distance than I’m willing to bridge right now). One way to see this is that it is still considered rape if the victim was unconscious and thus not capable of experiencing anything, pleasant or otherwise. Also, if someone consented to sex at the time but later decides she didn’t enjoy it, I assume you wouldn’t allow her to retroactively declare it rape.
Utilitarians do this. Consequentialists don’t necessarily.
Not only utilitarians, but von Neumann-Morgenstern rational agents.
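For reference, a sketch of the standard VNM representation theorem (the usual statement, not anything specific to this thread): an agent whose preferences over lotteries satisfy the axioms behaves as if maximizing the expectation of some utility function $u$, so for lotteries $L_1$ and $L_2$:

$$L_1 \succeq L_2 \iff \mathbb{E}_{L_1}[u] \ge \mathbb{E}_{L_2}[u].$$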
Well, sure. That wasn’t meant to be an exhaustive list; I only meant to highlight that consequentialism does not necessarily do this.
But yes, you’re quite right.
“No. There’s morality, and then there’s all the many things that are not morality.”
Is this only a linguistic argument about what to call morality? With, e.g., virtue ethics claiming that all areas of life are part of morality, since ethics is about human excellence, and your claim that ethics only has to do with obligations and rights? Is there a reason you prefer to limit the domain of morality? Is there a concept you think gets lost when all of life is included in ethics (in virtue ethics or utilitarianism)?
Also, could you clarify the idea of obligations: are there any obligations which don’t emanate from the rights of another person? Are there any obligations which emerge inherently from a person’s humanity and are therefore not waivable?
You could re-name everything, but if you renamed my deontological rules “fleeb”, I would go on considering fleeb to be ontologically distinct in important ways from things that are not fleeb. I’m pretty sure it’s not just linguistic.
Because there’s already a perfectly good vocabulary for the ontologically distinct non-fleeb things that people are motivated to act towards—“prudence”, “axiology”.
Unassailable priority. People start looking at very large numbers and nodding to themselves and deciding that these very large numbers mean that if they take a thought experiment as a given they have to commit atrocities.
Yes; I have a secondary rule which for lack of better terminology I call “the principle of needless destruction”. It states that you shouldn’t go around wrecking stuff for no reason or insufficient reason, with the exact thresholds as yet undefined.
“Humanity” is the wrong word; I apply my ethics across the board to all persons regardless of species. I’m not sure I understand the question even if I substitute “personhood”.
Let’s take truth-telling as an example. What is the difference between saying that there is an obligation to tell the truth, or honesty being a virtue, or that telling the truth is a terminal value which we must maximize in a consequentialist-type equation? Won’t the different frameworks be mutually supportive, since obligation will create a terminal value, virtue ethics will show how to incorporate that into your personality, and consequentialism will say that we must be prudent in attaining it? Similarly, prudence is a virtue which we must be consequentialist to attain and which is useful in living up to our deontological obligations. And justice is a virtue which emanates from the obligations not to steal and not to harm other people, and therefore we must consider the consequences of our actions so that we don’t end up in a situation where we will act unjustly.
I think I am misunderstanding something in your position, since it seems to me that you don’t disagree with consequentialism in that we need to calculate, but rather in what the terminal values are (with utilitarianism saying utility is the only terminal value, and you saying that there are numerous ones, such as not lying, not stealing, not being destructive, etc.).
By obligations which emerge from a person’s personhood and which are not waivable, I mean that they emerge from the self and not in relation to another’s rights, and therefore cannot be waived. To take an example (which I know you do not consider an obligation, but it will serve to illustrate the class, since many people have this belief): a person has an obligation to live out their life as a result of their personhood, and therefore is not allowed to commit suicide, since that would be unjust to the self (or nature, or god, or whatever).
The first thing says you must not lie. The second thing says you must not lie because it signifies or causes defects in your character. The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie. The systems really don’t fuse this prettily unless you badly misunderstand at least two of them, I’m afraid. (They can cooperate at different levels and human agents can switch around between implementing each of them, but on a theoretical level I don’t think this works.)
Absolutely not. Did you read Deontology for Consequentialists?
I still don’t know what you mean by “emerge from the self”, but if I understand the class of thing you’re pointing out with the suicide example, I don’t think I have any of those.
Yes, I read that post. (Thank you for putting in all this time clarifying your view.)
I don’t think you understood my question, since “The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie” is not viewing ‘not lying’ as a terminal value but rather as an instrumental value. A terminal value would mean that lying is bad not because of what it will lead to (as you explain in that post). But if that is the case, must I act in a situation so as not to be forced to lie? For example, let’s say you made a promise to someone not to get fired in your first week at work, and if the boss knows that you cheered for a certain team he will fire you. Would you say that you shouldn’t watch that game, since you will be forced either to lie to the boss or to break your promise of keeping your job? (Please fix any loopholes you notice, since this is only meant for illustration.) If so, it seems like the consequentialist utilitarian is saying that there is a deontological obligation to maximize utility, and therefore you must act to maximize that, whereas you are arguing that there are other deontological values; but you would agree that you should be prudent in achieving your deontological obligations. (We can put virtue ethics to the side if you want, but won’t your deontological commitments dictate which virtues you must have, for example honesty, or even courage, so as to act in line with your deontological obligations?)
That’s a very long paragraph, I’m going to do my best but some things may have been lost in the wall of text.
I understand the difference between terminal and instrumental values, but your conclusion doesn’t follow from this distinction. You can have multiple terminal values. If you terminally value both not-lying and also (to take a silly example) chocolate cake, you will lie to get a large amount of chocolate cake (where the value of “large” is defined somewhere in your utility function). Even if your only terminal value is not-lying, you might find yourself in an odd corner case where you can lie once and thereby avoid lying many times elsewhere. Or if you also value other people not lying, you could lie once to prevent many other people from lying.
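A toy numeric sketch of that point; all the weights here are made up:

```python
# Toy utility function with two terminal values: not-lying and cake.
# All weights are made up for illustration only.

LIE_DISVALUE = -10.0   # terminal disvalue per lie I tell
CAKE_VALUE = 0.5       # terminal value per slice of cake

def utility(my_lies: int, cake_slices: int, others_lies: int = 0) -> float:
    # If I also disvalue lies told by others, they count at the same rate.
    return (LIE_DISVALUE * my_lies
            + CAKE_VALUE * cake_slices
            + LIE_DISVALUE * others_lies)

print(utility(my_lies=1, cake_slices=3))    # -8.5: not worth lying for
print(utility(my_lies=1, cake_slices=30))   #  5.0: a "large" amount of cake
# One lie of mine that prevents five lies elsewhere comes out ahead:
print(utility(1, 0, others_lies=0) > utility(0, 0, others_lies=5))  # True
```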
AAAAAAAAAAAH
It is prudent to be prudent in achieving your deontological obligations. Putting “should” in that sentence flirts with equivocation.
I think it’s possible to act completely morally acceptably according to my system while having whopping defects of character that would make any virtue ethicist blush. It might be unlikely, but it’s not impossible.
Thank you, I think I understand this now.
To make sure I understand you correctly: are these correct conclusions from what you have said?
a. It is permitted (i.e. ethical) to lie to yourself (though probably not prudent).
b. It is permitted (i.e. ethical) to act in a way which will force you to tell a lie tomorrow.
c. It is forbidden (i.e. unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future).
d. The differences between the systems will only express themselves in unusual corner cases, but the underlying conceptual structure is very different.
I still don’t understand your view of utilitarian consequentialism, if ‘maximizing utility’ isn’t a deontological obligation emanating from personhood or the like, where does it come from?
A, B, and C all look correct as stated, presuming situations really did meet the weird criteria for B and C. I think differences between consequentialism and deontology come up sometimes in regular situations, but less often when humans are running them, since human architecture will drag us all towards a fuzzy intuitionist middle.
I don’t think I understand the last paragraph. Can you rephrase?
Why don’t you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn’t deontological where does it come from?
The imperative to maximize utility is utilitarian, not necessarily consequentialist. I know I keep harping on this point, but it’s an important distinction.
Edit: And even more specifically, it’s total utilitarian.
Keep up the good work. Any idea where this conflation might have come from? It’s widespread enough that there might be some commonly misunderstood article in the archives.
I don’t know if it’s anything specific… classic utilitarianism is the most common form of consequentialism espoused on LessWrong, I think, so it could be as simple as “the most commonly encountered member of a category is assumed to represent the whole category”.
It could also be because utilitarianism was the first (?) form of consequentialism to be put forth by philosophers. Certainly it predates some of the more esoteric forms of consequentialism. I’m pretty sure it’s also got more famous philosophers defending it, by rather a large margin, than any other form of consequentialism.
It’s VNM consequentialist, which is a broader category than the common meaning of “utilitarian”.
To me, it looks like consequentialists care exclusively about prudence, which I also care about, and not at all about morality, which I also care about. It looks to me like the thing consequentialists call morality just is prudence and comes from the same places prudence comes from—wanting things, appreciating the nature of cause and effect, etc.
Thank you for all of your clarifications, I think I now understand how you are viewing morality.
Could you elaborate on what this thing you call “morality” is?
To me, it seems like the “morality” that deontology aspires to be, or to represent / capture, doesn’t actually exist, and thus deontology fails on its own criterion. Consequentialism also fails in this sense, of course, but consequentialism does not actually attempt to work as the sort of “morality” you seem to be referring to.
What good are their rights to anyone who has starved to death?
Yes, and do you have a reason why this is in fact not a valid conclusion? Or is this an appeal to what the law happens to say today?
I, personally, find this situation morally repugnant. Psychological unity of mankind leads me to hope Alicorn does too. What more justification could you ask?
However, even though signing a contract does not seem to remove the harm of rape, I of course cannot rule out the possibility that I am picturing the situation incorrectly, or that the benefits would outweigh the harm. (Yes, Alicorn has stated that they care about harms outside their framework of rights.)
Alicorn, on the other hand, likely already holds the standard opinion on rape (it is bad), and thus would find a certain inconsistency in endorsing a position that was OK with it. So in that sense, yes, the law today is valid evidence that this might be an issue, if one looks at the causal chain that led up to it changing.
Well, the fact that these laws were only passed very recently suggests that it is you who is out of step with the psychological unity.
The context appears to be a moral panic about rape that among other things argues for dispensing with due process for accused rapists, and that if two drunk people have sex and regret it later this means the man raped the woman. So no, the law today is not in fact valid evidence.
I was relying on the framing; obviously I wouldn’t expect people to respond the same way in literally any context. (You’re right, I didn’t make that clear.)
Hmm. It is true that rapists are demonized, and this is sometimes extended past edge cases—but obviously, you are yourself relying on the fact that this is obvious nonsense to most people for your rhetorical point.
This seems more akin to similar effects that spring up around other major crimes—not that that makes it rational, of course, or implies that the laws genuinely have greater expected utility than their inverse.
I have no idea how to interpret this even semi-charitably. To me this translates to “I was relying on dark arts”.
I was relying/hoping that you weren’t sufficiently caught up in the panic to no longer recognize this as obvious nonsense. My point is that relying on laws that have only been passed in the previous decade, especially given that there’s a moral panic involved, is highly dubious.
Yes, similar effects have sprung up around other major crimes in the past. However, I believe that rapists are the moral panic du jour.
It’s less likely that someone will ignore facts that have been recently brought to their attention. You’re right, I wasn’t quite sure what word to use there, I may have screwed up.
With respect, have you ever said that to someone and had them respond “Well yeah, sure! Of course women can retroactively make anyone they’ve ever had sex with a rapist.”
I’m sure you’ve seen people endorse equivalent conclusions via deceptive wording, but the general response I would predict from all but a very small core would be “we don’t believe that!”
Hmm, perhaps. I would have gone for pedophiles myself (a term already spreading to include all creepily large age gaps, by the way), but this isn’t really a contest.
Psychological what of what? You mean “current societal norms based on a multitude of shifting cultural and biological determinants, subject to change”?
No, I don’t, as you’re well aware from our many, many lengthy discussions on the point.
I’ll note that my prediction in this case was correct, no?
Alicorn is part of the same WEIRD culture you are, so I don’t see how this is evidence that the belief is universal.
The belief is not universal. The ability to empathise with rape victims is (well, actually, we define our terms so as to exclude psychopaths and the like.) Also, yes, I share certain cultural assumptions and conventions, so I have reason to think Alicorn may respond the same way.
My model predicted Alicorn would react to this specific question about that belief, after living her sort of life, with a reluctance or outright refusal to bite the bullet and endorse rape. Not that every human ever would unilaterally endorse my particular belief about rape.
[Kawoomba—you have no way of knowing this—strenuously objects to my model of human nature, since it predicts human CEV coheres rather well, whereas they believe that ethics are (and should be?) entirely relative and largely determined by one’s circumstances. They like to jump on anything I say that even vaguely implies human morality is somehow universal. There are some quite comprehensive discussions of this point scattered throughout LessWrong.]
Whether the activity in question constitutes “rape” is precisely the question under discussion.
Actually, I don’t doubt that there are many characteristics that apply to a vast majority of humans, thanks e.g. to mirror neurons et al. As such, basing predictions on such common factors is quite valid. For example, predicting that someone who doesn’t eat for extended periods will experience a state called “being hungry”, and will show certain behavioral characteristics associated with that state.
I just dislike the term because it’s typically used not as a descriptor (“many humans show a propensity for X”; in the example: “if you don’t eat for a period of time, you will probably act in accordance with ‘being hungry’”) but (invalidly) as prescriptive (“if you don’t eat for a period of time, you should act in accordance with ‘being hungry’”). Getting an “ought” from an “is”, and all that.
I can predict with reasonable confidence that all participants in the current discussion are currently wearing clothes. Based on the “garmental unity of mankind”. Doesn’t mean they should.
If PUoM means there are shared desires, then the only way you could fail to get the “ought” from the “is” is by denying that “oughts” have anything to do with desires, surely.
For one, if there are supposedly universally shared desires among a group of agents, and yet you find one agent among them who doesn’t share those, you’ve found a contradiction—those universally shared desires weren’t universally shared, after all.
For two, describing that a majority (or even a supermajority, or a super-duper-overwhelming majority) shares certain desires would be a description, not some majority-tyrannical edict for the outliers to conform to. For example: Stating that a majority of humans share heterosexual desires doesn’t mean that those who don’t somehow should, as well (just to go with the obvious applause-light example which is applicable in this case, it’s easy to come up with arbitrarily many other examples).
And don’t call me Shirley.
Facts about desires conjoined with an intuitively appealing maxim, such as “everyone ought to maximize the satisfaction of everyone’s desires”, can imply oughts.
What is this “intuitively appealing” justification, and why would it be binding for those it doesn’t “intuitively appeal” to? It’s not my intuition.
You wrote that you believe that persons have rights. How do you determine what rights they have?
IANAlicorn, but, since I have the same belief, I’ll give it a shot. My imperfect introspection tells me that, since the world where people don’t have rights would quickly become unfair and full of suffering (and this has been repeatedly experimentally tested), I want to live in a world where I, my family or someone I can identify with would have less of a chance of being treated unfairly and made to suffer needlessly. Pretending that people have “unalienable rights” goes a long way toward that goal, so I want to believe it and I want everyone else to believe it, too. To dig deeper, I am forced to examine the sources for my desire for fairness and the origins of my empathy (imperfect though it is), and the available literature points to the mix of genetics and upbringing.
That sounds like rule utilitarianism, or a rule utilitarianism-like consequentialism, not like a deontological justification for human rights.
I suppose you are right. However, if you skip the introspection part, “people have rights” makes sense in most cases without having to worry about utilities. It’s the edge cases, like the trolley problem, which require deeper analysis.
I agree, but that’s all basically consequentialist.
A decent justification, but not very deontological. What I was curious about is how Alicorn determines what rights exist purely deontologically, without reference to consequences.
Since I’m no longer maybe going to write a thesis on it, mostly I don’t work on this a lot. Not lying, not stealing, and not attacking people does pretty good for everyday. There’s sort of an informal checklist when I think something might be a right—the rights have to be reasonably consistent with each other, they’re overwhelmingly negative rights and not positive ones, simpler ones are better, etc. This would be easier with a sample maybe-a-right but I haven’t examined any of those recently.
If I may offer one —
Suppose that I am photographed on the street outside a place that has a bad reputation (with some people). The photographer might publish the photo, which could lead viewers to believe bad things of me.
One acquaintance of mine, M, claims that I have a right to forbid the photographer from publishing this photo; I have the right to control publicity about me or the use of my image, even though the picture was taken in public.
Another acquaintance, B, claims that the photographer has a freedom-of-speech right to publish it, so long as they do not explicitly say anything false about me. B believes that it would be nice of the photographer to ask my permission, but that I do not have a right to this niceness.
Still another acquaintance, R, says that it depends on who I am: if I am the mayor of Toronto, I have no right to control photos of me, since my actions are of public interest; but if I am a privately employed engineer of no public reputation, then I do have that right.
Okay, I’ll walk through my process of apprehending and making a call on this situation. It looks like a good example, thanks for coming up with it.
The conflict here is between you and the photographer—other persons in the story have opinions but aren’t directly involved. The steps are that there was an opportunity to ask you if it was okay to photograph you (which the photographer passed over), the decision to photograph you, the opportunity to ask you if it’s okay to publish it (which the photographer also passes over), and the decision currently at hand of whether to publish the photo. If there’s a rights violation, potential or actual, it’s probably in one of those places. The statement of the problem doesn’t suggest that you’ve committed any rights violations by happening to be in this location.
The fact that two chances to ask for consent have been passed up is suspicious. It’s not a guarantee that something has gone wrong—the photographer is allowed to look at you without securing permission, for instance—but it’s a red flag. In the normal course of things, people waive some of their rights when asked explicitly or by other mechanisms all the time. People waive their right to refuse sex, for instance, on a per-occasion basis in order to have non-rape sex.
You don’t actually do anything in this story, except exist in a place that by stipulation you are quite entitled to be, so only the photographer might be committing a wrong here.
So the obvious candidate possibilities are that you have a right not to be photographed, or that you have the right not to have your likeness publicized, or that you have no such rights and the photographer may do as they like with the photograph provided no other wrongs (such as libel, a form of lying) are committed in so doing.
But earlier I said the photographer is allowed to look at you without permission. You’re in a public place, where anyone, including photographers as an unspecial class of anyone, may walk by. The upshot of a photograph is that others, too, may look at you. Any of them could have walked by in real time and seen you without wronging you. There’s no obvious mechanism by which a gap in time should change things. If one of the passersby had a photographic memory, that wouldn’t change the fact that they could look at you. Similarly, the fact that people who live far away from this location might not have had a chance to show up in person and espy your presence, or the information that anything going on might be of interest, doesn’t seem like it has anything to do with anything.
So it seems to me that the photographer is probably, at worst, being a dick. You do not have a right to prohibit someone from photographing you and publishing a photo of you unless something else is going on. (I feel like I should mention now that I hate this result. I don’t photograph well and definitely don’t like the idea of my likeness being used in any which way without my agreement. In fact, if the law allowed, I might even pursue someone who did this to me via legal routes, which—I hope obviously—are separate from ethical condemnation. In any event I’m not rigging this to suit myself.)
But supposing it were the other way around, your acquaintance R might still have a point. Assuming it’s known that elected politicians are treated differently with respect to the use of their photographs, pursuing a career as a politician might constitute waiving one’s right not to be photographed (if we had that right, which I concluded above we probably don’t). In this counterfactual, this would apply to the mayor of Toronto, but not people who became public figures without doing anything wrong (that would be a potential rights forfeiture, particularly if they are a threat to the photograph-viewing population) or choosing to become public figures on purpose.
Ok, this is quite a detailed response and I appreciate the thought that went into writing it. However, from my perspective, it raises more questions than it answers. For example, you say things like this:
But why not? I’m not asking this just to be a contrarian. For example, many people believe that all of us have a fundamental right to privacy; this would imply that you do, in fact, have the right to “prohibit someone from photographing you and publishing a photo of you”. Presumably you disagree, but on what basis?
Furthermore, you say that you
I don’t see how that works. If you believe that a person has a right to photograph you and publish the photo without your permission, and yet you are launching a legal challenge against him for doing so, then are you not engaging in an immoral attack? Sure, it’s not a physical attack, but it’s an attack nonetheless, and previously you stated (assuming I understood you correctly) that attacks on people who have violated no rights are immoral.
Can you tell me where I lost you in the description of why I do disagree?
No. This isn’t how the right is framed. They don’t have a right to do it; no one must behave in a way to protect their ability to do so; I don’t have to stand still so they can get a clear shot, I don’t have to go out in public, if I happen to own a news outlet I don’t have to give them a platform, if I acquire a device that makes me look like a smudge of static to nearby cameras I can carry it around without this being morally problematic. (Perhaps unless I’m under some sort of agreement to be visible to security cameras, or something like that.) I just don’t have the right to not be photographed. Remember that rights are overwhelmingly negative. The fact that someone commits no wrong by doing X does not imply that others commit a wrong by making X inconvenient or impossible.
(You’re also being kind of overbroad in understanding my use of the word “attack”, which was intended broadly, but not so broadly as to include “seeking legal recompense in response to an upsetting, by stipulation illegal, behavior which does not happen to constitute a violation of any intrinsic moral rights”.)
You state that “you do not have a right to prohibit someone from photographing you”, but I don’t understand where this rule comes from. You expand on it in your explanation that follows, but again, I don’t fully understand your reasoning. You say:
That makes sense to me, in that your rule is consistent with the rest of your system. I may even agree that it’s a good idea from a consequentialist point of view. However, I still do not understand where the rule comes from. Is photographing you qualitatively different from murdering you (which would presumably be immoral), and if so, why? Come to think of it, why are all rights negative?
I may have misunderstood your goals. In launching a legal challenge, what do you hope to accomplish in terms of “recompense”? Are you seeking to extract a fine from the photographer, or perhaps to restrict his freedom in some way, or both?
Let’s say that you seek a fine, and you succeed. How is that different, morally speaking, from breaking into his house and stealing the money? In one case you use a lockpick; in the other one you use a lawyer; but in the end you still deprive the photographer of some of his money. Why does one action count as an attack, and the other does not?
Now that I think of it, perhaps you would consider neither action to be an attack? Once again, I’m not entirely sure I understand your position.
Are all rights negative? What about, say, the right to life, or the right to -not-starving-to-death?
Many people seem pretty jazzed about the idea of a “right to marriage” or a “right to insert-euphemism-for-abortion-here”, based largely on the fact that our (as in, their and my) tribe considers the policies these imply applause lights. I have no idea what Alicorn thinks of this, though.
I’m fine with that kind of right existing in the legal sense and encourage all the ones you listed. I don’t think anyone has a fundamental moral obligation to feed you or perform abortions for you or conduct a marriage ceremony for you, though you can often get them to agree to it anyway, empirically, with the use of money.
If I may, I’m curious on what basis you consider those rights a good idea? Is it just a whim? Are you worried real rights might be violated?
I’m not usually in favour of calling all these various things “rights”, since it rather confuses things—as you’re probably aware—but I must admit the “right to not-starving-to-death” sounds important.
Are you saying you would be OK with letting people starve to death? Or am I misunderstanding?
I think those rights-in-the-legal-sense are good for political and social reasons. Basically, I think they’re prudent.
I don’t think I am doing something wrong, right now, by having liquid savings while there exist charities that feed the hungry, some of whom are sufficiently food-insecure for that to make a difference to their survival. I bite the bullet if the starving person happens to be nearby: this doesn’t affect their rights, and only rights have a claim on my moral obligations. I might choose to feed a starving person. I will support political policy that seems like it will get more starving people fed. I will tend to find myself emotionally distraught on contemplating that people are starving, so I’d resent the “OK with it” description. Also, when I have children, they will have positive claims on me and I will be morally obligated to see to it that they are fed. Other than that? I don’t think we have to feed each other. It’s supererogatory.
I see. And what are those reasons?
Yeah, that’s a common problem with consequentialists. Obviously, we have various instincts about this and it’s both hard and dangerous to ignore them.
I’m actually somewhat pleased to hear that, because it’s not the first time a deontologist has told me that. I was too speechless to respond, and they changed their mind before I could find out more.
Ah, here we go. That’s good!
You do realize that sort of thing is usually part of what’s referred to by “morality”? So leaving it out seems … incomplete.
Postscript:
I’m not sure, but it may be that there’s something causing some confusion, by the way. I’ve seen it happen before in similar discussions.
There seem to be two functions people use “morality” for—judging people, and judging actions.
Consequentialists, or at least standard-lesswrong-utilitarian-consequentialists, resolve this by not judging people, except in order to predict them—and even then, “good” or “bad” tend to be counterproductive labels for people, epistemically.
Instead, they focus entirely on judging their actions—asking, which of my options is the correct one?
But I gather you (like a lot of people) do judge people—if someone violates your moral code, that makes them an acceptable casualty in defending the rights of innocents. (Although you’re not a total virtue ethicist, who only judges people, not actions; the polar opposite of LW standard.)
I’m not really going anywhere with this, it just seems possibly relevant to the discussion at hand.
I’m gonna decline to discuss politics on this platform. If you really want to talk politics with me we can do it somewhere else, I guess.
I would tend to describe myself as judging actions and not people directly, though I think you can produce an assessment of a person that is based on their moral behavior and not go too far wrong given how humans work.
Oh! Oh, OK. Sorry, I just assumed those were so vague as to avoid mindkilling. Of course we shouldn’t derail this into a political debate.
In fairness, you’re clearly smart enough to disregard all the obvious mistakes where we make prisons as awful as possible (or, more likely, just resist making them better) because we Hate Criminals.
This is a more abstract idea, and much more limited. I’m not criticising it (here). Just noting that I’ve seen it cause confusion in the past, if left unnoticed.
Actually, what the heck, while I’m here I may as well criticize slightly.
This is little more than my first thought on reviewing what I think I understand as your moral system.
It’s discovered that Hitler’s brain was saved all those years ago. Given a giant robot body by mad scientists, he is rapidly re-elected and repurposes the police force into stormtroopers for rounding up Jews.
You, being reasonably ethical, have some Jews hidden in your house. The stormtroopers have violated relevant rights (actually, they could be new on the job, but you can just tell they’re thinking about it—good enough), so their relevant right not to be lied to is waived, and you tell them “no Jews hidden here!” quite cheerfully before shutting the door.
However, naturally, they know Bayes’ theorem and they’ve read some of your writings, so they know you’re allowed to lie and your words aren’t evidence either way—although silence would be. So they devise a Cunning Plan.
They go around next door and talk to Mrs. Biddy, a sweet old lady who is, sadly, unaware of the recent political shift. She was always told to respect police officers and she’s not going to stop now at her age. The white-haired old grandmother comes around to repeat the question the nice men asked her to.
She’s at the door, with the two stormtroopers standing behind her grinning. What do you do?
I mean, obviously, you betray the Jews and they get dragged off to a concentration camp somewhere. Can’t lie to the innocent old lady, can’t stay silent because that’s strong Bayesian evidence too.
But you’re reasonably intelligent, you must have considered an isomorphic case when constructing this system, right? Casualties in a Just War or something? Two people manipulated into attacking each other while dressed as bears (“so the bear won’t see you coming”)? Do you bite this bullet? Have I gone insane from lack of sleep? I’m going to bed.
Why are all these hypothetical people so well-versed in one oddball deontologist’s opinions? If they’re that well-read they probably know I’m half Jewish and drag me off without asking me anything.
Mrs. Biddy sounds culpably ignorant to me, anyway.
You may or may not have gone insane from lack of sleep.
Um, the purpose of prisons is to punish criminals, so yes, prisons should be awful, not necessarily “as awful as possible”, but for sufficiently serious crimes quite possibly.
EDIT: Wait, you mean “punish” in the consequentialist sense of punishing defection, right?
Yes, but this does not imply that the current level of awfulness is optimal. It certainly does not mean we should increase the awfulness beyond the optimal level.
But if someone proposes that the current level is too high (whether on consequentialist or legal grounds), one of the arguments they will encounter is “you want to help rapists and murderers?! Why? Those bastards deserve it.”
(The consequentialist argument for, say, the current state of US prisons is of course undercut by the existence of other countries with much less awful prisons.)
If you want to look at optimal awfulness, there is a much better test: look at the crime rate. The current crime rate is extremely high by historic standards. Furthermore, the recent drop from its peak in the 1970s has been accomplished by basically turning major cities into Orwellian surveillance states. I think increasing the awfulness of prisons would be a better solution; at the very least it puts the burden on the criminals rather than the innocent.
That really isn’t a good argument for the current state of US prisons, is it? Clearly, even openly allowing institutional rape has failed to help; yet other, less harsh countries have not seen soaring crime rates by comparison.
I’ve seen studies suggesting that certainty of punishment is much more important for determining behavior than the extremity of it—it’s more a question of a strong justice system, a respect for authority (or fear, one might say), than people performing expected utility calculation in their heads.
Personally I’m in favor of corporal punishment, cheaper than prisons and you don’t have the problem of long term prisoners getting used to it.
It is known that lots of people enjoy inflicting pain on the helpless. Anyone who punishes prisoners because they enjoy doing so is in a conflict of interest, at least if he has any discretion in how to carry out the punishment.
Also, it’s possible to take that effect into account when deciding punishment.
More so than existing prison guards?
I don’t know if it’s more so because comparing degrees here is hard, but I would say that we should not hire prison guards who enjoy punishing prisoners and have discretion in doing so.
That is one possible purpose to have prisons, but not the only one.
You can rephrase “punishing criminals” in terms of quasi-consequentialist decision theory as deterrence/counterfactual crime prevention. All the other reasons I’ve heard are little more than rationalizations by people who want to punish/deter criminals but feel icky about the word “punishment”.
What possible reasons there could plausibly be for jailing people, and what actually in fact motivates most people to support jailing people, are not the same thing.
Some possibilities for the former include:
1. Retribution (i.e., punishing criminals because they deserve it).
2. Closure/satisfaction for the victim(s), or for family/friends of the victim(s).
3. Deterrence, i.e. protecting society from counterfactual future crimes we expect other people to otherwise perpetrate.
4. Protecting society from counterfactual future crimes we expect this same criminal to otherwise perpetrate.
5. Rehabilitation.
… (other things I am not thinking of at the moment)
None of those things are the same as any of the others. Some fit the rather imprecise term “punishment” closely (1, 2), others not so closely (3, 4), still others not at all (5).
I would argue that (1) and (2) are in fact the same thing just formulated at different meta-levels, and that (3) and (4) are the quasi-consequentialist decision theory “translations” of (1) and (2). Rehabilitation (5) is what I called a fake reason, as can be seen by the fact that the people promoting it are remarkably uninterested in whether their rehabilitation methods actually work.
I’m not entirely sure what you mean by this. Are you suggesting that people who advocate (3) and (4) as actual justifications for having prisons do not have those things as their true, internal motivations, but are only claiming them for persuasion purposes, and actually (1) and/or (2) are their real reasons? Or are you saying something else?
That may well be, but that doesn’t make it not an actual good reason to have prisons.
Your comment which prompted me to start this subthread spoke about what should be the case. If you say “this-and-such are the actual motivations people have for advocating/supporting the existence of prisons”, fine and well. But when you talk about what should happen or what should exist, then people’s actual internal motivations for advocating what should happen/exist don’t enter into it.
Something else; see my reply to hen, where I go into more detail about this.
See hen’s comment for the problem I have with rehabilitation.
With respect, both hen’s comment and your reply read to me like nonsense. I can neither make sense of what either of you are saying, nor, to the degree that I can, see any reason why you would claim the things you seem to be claiming. Of course, I could merely be misunderstanding your points.
However, I think we have now gone on a tangent far removed from anything resembling the original topic, and so I will refrain from continuing this subthread. (I’ll read any responses you make, though.)
I think Eugine_Nier might be trying to say that the reason we evolved the emotions of anger and thirst for vengeance is because being known to be vengeful (even irrationally so) is itself a good deterrent. And possibly that this therefore makes these the same thing. But I’m not sure about that because that seems to me like a straightforward case of mixing up adaptation executors and fitness maximizers.
You mean hen’s comment about the dignity of moral agents, or my statement about how deterrence is the quasi-consequentialist translation of retribution?
Both, I’m afraid.
To see what I mean by the dignity of moral agents, think of a criminal as a moral agent, rather than a defective object to be fixed. The idea of rehabilitation should acquire a certain Orwellian/totalitarian aura; i.e., this is the kind of thing the Ministry of Love does.
As for my statement about deterrence and retribution, I believe we’re having that discussion here.
A datapoint: I think the purpose of prisons is the institutional expression of anger, and insofar as they do this, they are an expression of respect for the criminal as a moral agent. In fact, I think that the use of prisons as a deterrent or to modify behavior is downright evil: you’re not allowed to put people in a box and not let them out just to change the way they act, and especially not to communicate something to other people.
(For the record, it looks like you may not be a consequentialist, but it seems worth asking.)
Um … why not? I mean, when we all agree it’s a good idea, there are reasonable safeguards in place, we’ve checked it really does reduce rapes, murders, thefts, brutal beatings … why not?
Is it OK to lock someone in a box because you’re angry? Isn’t that, in fact, evil? Does it become OK if you “respect” them (I’m not sure what this refers to, I admit.)
I should probably mention that hen has answered me via PM, and they are, in fact, basing this on consequentialist (more or less) concerns.
I more-or-less agree with your worldview, with the caveat that I would interpret counterfactual crime prevention as anger translated into decision-theory language (it helps to think about the reason we evolved the emotion of anger). Deterrence as applied to other people is a version of counterfactual crime prevention where we restrict our thinking to other people in this event branch, as opposed to all event branches.
To a VNM consequentialist, in every situation there is a unique “best action”; by contrast, for a deontologist or virtue ethicist, their morality doesn’t specify a single action to take. Thus you are allowed (and possibly encouraged) to help the starving man, but aren’t required to.
And? I should hope anyone reading this thread has already figured that out—from all the times it was mentioned.
Is there some sort of implication of this I’m too stupid to see?
That it doesn’t require bullet biting to say that you are not morally obligated to help the starving person.
How so? It’s an unpleasant thing to say, and conflicts with our raw intuition on the matter. It sounds evil. That’s all biting a bullet is.
Remember, it’s sometimes correct to bite bullets.
What do you mean our intuition on the matter? My intuition says that at least it depends on how the man came to be starving.
Well, since Alicorn’s system does not take account of that, this is in any case biting a bullet for you as well.
[With that acknowledged, I am curious about those intuitions of yours. Is this about punishing defection? The standard “well, if they’re a Bad Person, they deserve what’s coming to them”? Or more “it’s their own fault, they made their bed let them lie in it, why should we be responsible for their foolishness”, that sort of thing?]
As you may have noticed, I’m not Alicorn.
Both. Also, I have some more examples that could fall under one or both, depending on how one defines “defection” and “foolishness”. If someone decided that they’d rather not work and instead rely on my charity to get food, they won’t be getting my charity. Also, if CronoDAS comes by my house begging for food, the answer is no.
Another example is that my response to the famous train dilemma depends on what the people were doing on the track. If they were, say, picking up pennies, I’m letting them get run over.
Well … yeah. Because you’re replying to something I said to Alicorn.
Is this for game-theoretic reasons, or more of a virtue-ethics “lazy people don’t deserve food” thing?
Are we killing people for stupidity now? I mean, I guess if the numbers were equal, the group drawn from the general population is a better bet to save than the group selected for “plays on train tracks”—but I don’t think that’s what you meant.
Wait, is this a signalling thing? Y’know, sophisticated despair at the foolish masses? If it is, there’s no need to reply to this part; I’ll drop it.
Did you click on my link? “Picking up pennies on railroad tracks/in front of a steamroller” is a well-known metaphor in economic circles for taking certain types of risks.
However, to answer your question: no, I (normally) won’t kill someone for his stupidity, but I see no reason to save him, and certainly no reason to kill other people to save him.
Yes, I clicked the link.
OK, that’s a little scary (or would be, anyway). Um … why don’t you care about the suffering and death of someone “stupid” (or risk-taking)?
What I find scary is that you appear to be willing to sacrifice innocent bystanders to save stupid people from their own stupidity.
Why should I?
If they chose to take that kind of risk, they are responsible for its consequences.
Would you prefer that others care about your suffering and death, if something happened such that you became (temporarily or permanently) “stupid”?
In many cases, people are not aware of the risks they are taking; in many other cases, people may not have less-risky alternatives. Should they still be entirely responsible for the consequences? Because that seems to lean towards “just-world hypothesis” thinking, and if that’s where this is going, we may want to just go there and be done with it.
Would you like to be the innocent bystander sacrificed to save an idiot from the consequences of his own stupidity?
Me in particular, or people in general? Because there is a particular class of idiot that most people would GLADLY be sacrificed to save; they’re called “children”.
As for me, personally, that depends on the calculus. Am I saving one idiot, or ten? Are they merely idiotic in this circumstance, or idiotic in general (i.e., in most situations a normal human being might reasonably find themselves in)? Are we talking about a well-medicated version of me with a good chance of contributing meaningfully to society, or a cynical, hopeless, clinically depressed version of me that would gladly take ANY reason to die? Because I think at this point, we’re talking about weighted values, and I quite imagine that there’s a certain number of certain kinds of idiots that I would absolutely consider more worth saving than certain versions of myself, if I was doing the calculus honestly.
And if I’m not doing the calculus honestly, then I’m an idiot.
I think that here, “idiot” refers to idiocy for which the person is to blame. Children are not generally to blame for being idiots.
Can you describe the mechanism by which children are not to blame for their stupidity, but other beings are?
People do not choose to be children. People do choose to be careless or to refuse to learn. Idiocy that is caused by carelessness or refusal to learn is therefore the person’s fault.
In the unlikely case of someone who has, for instance, been infected by nanobots that force his brain to act carelessly, I would of course not hold him to blame.
As opposed to, say, just a reduced capacity for impulse control or learning? Or an ingrained aversion to thinking before acting?
EDIT: Heh. Actually… It looks like your specific example is more plausible than I thought.
Put more bluntly: are there some classes of people which are less a product of their environments and biologies than others?
(And I’m not merely saying this from the perspective of “why do you hold idiots accountable”; I’m also asking “why do children get a free pass?”)
I don’t give children a free pass. If an adult is sufficiently incompetent, I wouldn’t blame him, either.
However, I would not classify an adult as sufficiently incompetent for these purposes unless his impulse control is so bad that he can’t safely live on his own. (There is no inconsistency between this and considering children incompetent, since children cannot safely live on their own either.)
In the example given, I think if people are incompetent enough to risk themselves physical injury or death for the sake of picking up pennies, that’s pretty good evidence that they can’t safely live on their own without supervision.
If they managed to survive long enough to get to the railroad track to pick up the pennies, they’re probably able to live on their own without supervision unless there was an extreme stroke of luck involved (such as having been released from custody fifteen minutes ago).
They don’t quite choose to live in places with lots of lead, more omega-6 than omega-3 fats, and little lithium either, for that matter.
Would the government not be violating rights (rights-not-in-the-legal-sense) if it forced people to feed those whom they don’t want to feed?
My feelings about governments are complicated by the guarantee-of-exit thing I mentioned elsethread, but with that understood, I’m not opposed on any level to systematic taxation. If a government were rounding people up to work in agriculture or soup kitchens or what have you against their will, that would be wrong.
In the absence of guarantee-of-exit (and guarantee-of-entrance-into-something else?), is taxation a violation of people’s rights? If not, why not?
No; there doesn’t have to be a society that wants you, or for that matter one that is agreeable to your preferences.
I don’t think so. I think failing to provide guarantee-of-exit is a failing on the part of various governments and it does make some things they do less defensible, but I’m not opposed to taxes. Part of it is actually that it’s not a person collecting taxes.
I’m confused. It’s not a person collecting taxes? Are tax collectors, cops (if it comes to force), etc, not people?
I’m pretty sure the overwhelming majority of taxes are not collected in the tax-collector-based way depicted in the Disney version of “Robin Hood”. I do object when force comes to be involved. (I don’t have any suggestions on what to do about it. Something being wrong doesn’t, actually, stop anyone from doing it.)
They’re not collected in the tax-collector-based way because there’s no need to—there’s enough of a threat of force to get people to comply. If it’s a credible threat, the government would use force on non-compliers, presumably thus violating their rights. As you said, something being wrong doesn’t stop anyone from doing it—but it does license you to say that they shouldn’t do it, and it licenses the victims to resist.
Okay. Elsewhere in thread when I was walking through the photography example, I said that if there were a right to not be photographed but it were generally known that the customs were different for public figures, becoming a public figure on purpose might constitute consent. This is why I think guaranteed exit is so important—if it were in place, you could move to whatever country had the taxation setup you could best tolerate if they’d have you, and that would be that.
Even without guaranteed exit, countries can have a price of admission, though. (Sort of like even if there is no universal healthcare, your doctor can charge, and even if there is no food bank, so can the grocery store.)
I really doubt that anyone is waiting for me to license them to tax dodge or pick fights with cops.
This assumes that staying implies consent, which is a questionable assumption. It presupposes that the State has the right to do whatever it wants on its territory as long as it lets people leave (even if the only other state in the world is even more authoritarian). For example, if half of the world were ruled by North Korea and the other half by China, would you say that China’s policies were morally justified because people would be free to leave and move to North Korea?
No, but they may like morality to license them to avoid taxes or resist cops. (Although I do like the image of someone thinking, “Damn, taxes suck, if only that person who wrote that Twilight fanfic said I don’t have to pay them.”)
No kidding it’s questionable, hence my thing about guaranteed exit. But likewise the various agents of the government do not necessarily consent to freeloading. If the Red Cross puts out juice and cookies for blood donors, and you are not a donor, and you take some, you are stealing even if there is nowhere else for you to get food.
No, it does not imply that. They can’t do things suddenly, in particular (because then that particular aspect of the interaction hasn’t been consented to). Consent is also revocable at any time even if standing permission is granted. They also have to stick to contextual relevance in attempting to enforce laws. (Also, a government that was operating under Alicorn Morality couldn’t lie, which I think all by itself would shake some things up.)
I am unqualified to have an opinion on the details of the political situations in most countries. If I just read this as “Bad Country With Guarantee of Exit” and “Worse Country With Guarantee of Exit”, well, that sounds like a pretty lousy situation to be in, but nothing about this situation means the countries involved have to “charge less” or require different standards of behavior from their citizens.
Imagine that the world is divided between Fascistia and Communistan. One day, the Duce of Fascistia announces that in a year, all the wealth of the residents of Fascistia will be confiscated to build statues of Mussolini, but before then, they’re perfectly free to take their stuff and move to Communistan. The General Secretary of the Communist Party of Communistan announces that he’ll happily accept all new immigrants, but warns that in a year, all the wealth of residents of Communistan will be confiscated to build statues of Lenin. In this case, the change is not sudden (if you consider this sudden, change “in a year” to “in ten years”) and it doesn’t prevent either country’s residents from leaving. Is this a rights violation?
Or consider another scenario. One day you’re checking your mail and find a letter from famed thief Arsene Lupin, informing you that in a year he will be breaking into your house to steal a recent painting you’ve acquired. M. Lupin happens to read LessWrong from time to time, so he’s read your writings on morality. He writes that you are free to leave your house and take your possessions with you, thwarting him. Nevertheless, you don’t leave. In doing so, have you consented to the painting being stolen?
I am entertained by your examples.
Assuming the residents of Fascistia and Communistan have no wherewithal to create separate states (including by declaring subregions independent and declining to accept further services from the parent countries, thereby ending the transactional relationship; forming seasteads; flying to the Moon; etc.) it sure looks like they are in a pickle, unless they manage to use this year to become sculptor suppliers, or attempt to convince the leaders in question to change their minds. This is sort of like my version of the utility monster—sure, in real life, there are large numbers of diverse people and institutions you could choose to interact with, but what if your choices were Bad and Also Bad?! - and I have to bite the bullet here. (I do think it’s probably hard to construct a situation where nobody is, for whatever reason, capable of declaring independence, but if you cut off that route...)
I don’t consent to interact with M. Lupin or allow him into my house on any level. We are not in a transactional relationship of some kind that would imply this.
This seems a strange place to bite the bullet. Why can the state seize property (with ample warning) but M. Lupin can’t? The state is made of people, and if no person is permitted to seize it, then the state isn’t either. Alternatively, if the state is permitted to seize it, then some person must be as well, so it seems that people would then be allowed to make demands that entitle them to your stuff.
Why is this different from the state? Is it because it provides services? Would this be any different if M. Lupin broke into your house every day to do your laundry, without your consent, and then claimed that he had a right to the painting as payment for his services?
The services thing is key, but so is consent (of some kind, with guaranteed exit, etc. etc., caveat caveat). I don’t consent to M. Lupin coming into my house even to do my laundry; you can’t throw a book through somebody’s open window and demand ten dollars for it; if I make a batch of cookies I cannot charge my neighbors for the smell. If the people of Provinceland declare independence from Communistan, and Communistan officials commence visiting Provinceland without permission, continuing to maintain the roads even if Provincelanders wish they would go away, then Communistan is conducting a (bizarre) invasion, not a consensual transaction.
How many people does it take to secede? Would it be permissible for California to secede from the US? What about the Bay Area—would it be morally permissible for it to be its own country? What about a small suburb? One house? Can I unilaterally secede, then claim that tax collectors/cops are invading my country of Blacktransylvania?
I don’t have a minimum number in mind, although you’d certainly need a fair number for this to be advisable. I will bemusedly support your solo efforts at secession if that is meaningful to you, provided that the land you’re trying to secede with belongs to you or someone amenable to the project.
Thank you for explaining your position. It’s surprisingly radical, if your last sentence is to be taken literally. I have one last question. Assume a few of my neighbors and I secede, and say that tax collectors are unwelcome. May we then amend our permission to say that tax collectors are welcome, but only if they’re collecting up to X amount of taxes, where X is the amount needed to fund [list of US government services we support], in return for receiving those services?
I don’t see why not, but the United States is not obliged to offer the services a la carte.
What do you mean “comes from”? The rule in question fails to exist; it doesn’t have to come from anywhere, it just has to not be. Do you think that it does be?
Someone photographing you has a different intention from someone murdering you. (If the photographer believed that taking a picture of you would, say, steal your soul, then I would hold them responsible for this bad behavior even though they are factually mistaken.)
I don’t think literally all rights are negative. Positive rights are generally acquired when someone makes you a promise, or brings you into existence on purpose. (I think children have a lot of claim on their parents, barring unusual circumstances.) But nothing has happened to create a positive obligation between you and a random photographer.
I don’t actually know any jurisdiction’s laws about publishing nonconsensual photographs. What I’d be looking for would probably depend on what I could reasonably expect to succeed at getting. This entire endeavor has left the moral sphere as long as I don’t violate the photographer or anyone else’s rights. My goal would probably be to discourage non-consensual photography of me and in general, as it’s kind of a dick move, and to compensate myself (perhaps with money, since it’s nicely fungible) for the unpleasantness of having been nonconsensually photographed. If I do not, in so doing, violate any rights, I can seek whatever is available, no problem.
This entire thing is actually complicated by the fact that I think political entities should guarantee an opportunity of exit—that if you can’t live with your society’s set of rules you should be shown out to any other place that will have you. Without that, there’s definitely some tension on my moral system where it interacts with the law. If we had proper guarantee of exit, the photographer being around me at all constitutes agreement to live by applicable shared rules, which in this hypothetical include not nonconsensually photographing each other (I don’t know if that’s a real rule, but supposing it is) and also not breaking into each other’s houses and also producing fines when legally obliged to do so. In the absence of guarantee of exit it’s complicated and annoying. This also gets really stupid around intellectual property laws, actually, if you really want to put the squeeze on my system. I’m just gonna say here that any system will stop working as nicely when people aren’t cooperating with it.
I don’t think I’d characterize burglary as “attack”, but I already listed “stealing” separately in that shortlist of things.
I… am not sure what that paragraph means at all. In more detail, my question is twofold:
What are deontological rules in general, and rights in particular? Are they, for example, laws of nature such as gravity or electromagnetism; are they heuristics (and if so, heuristics for what); or are they something else?
How do we know which deontological rules we should follow in general, and which rights people have specifically? For example, you mentioned earlier that people do not have a right to not be photographed. How do you know this?
Once again, how do you know this?
Fair enough; I was using “attack” in the general sense, meaning “an action whose purpose is to diminish an actor’s well-being in some way”.
That said, I’m not sure I understand your model of how the legal system interacts with morality. At one point, you said that the legal system is ethically neutral; I interpreted this to mean that you see the legal system as a tool, similar to a knife or a lockpick. Thus, when you said that you’d wield the legal system as a weapon against the photographer (or, more specifically, his money), I questioned the difference between doing that and wielding a lockpick to accomplish the same end. But now I’m beginning to suspect that my assumption was wrong, and that you see the legal system differently from a tool—is that right?
Depending on how you define “purpose”, burglary still might not qualify. The purpose of a burglary isn’t to harm its victims, it’s to acquire their valuables; harm is a side effect.
Good point; in this case, the fact that the victims lose said valuables is merely a side effect of how physical reality works.
Perhaps a better definition would be something like, “an action at least one of whose unavoidable and easily predictable effects includes the diminishing of another actor’s well-being”.
Rights are a characteristic of personhood. Personhood emerges out of general intelligence and maybe other factors that I don’t fully understand. Rights are that which it is wrong to violate; they are neither laws of physics nor heuristic approximations of anything. They are their own thing. I do think they are necessary-given-personhood.
Can you tell me where I lost you in the detailed description of what my process to determine that people don’t have that right was? I wrote down the whole thing as best I could.
Promises are the sort of thing that generates positive rights because that’s what “promise” means. If it doesn’t do that, it’s something other than a promise. (At least formally. You could have other definitions for the same word. The particular sense in which I use “promise” is this thing, though.)
I think if I were you I’d be really careful with my paraphrasing. I’m not going to object to this one in particular, but it brought me up short.
The legal system is many things; it definitely works as a tool for problems like collective action, coordination problems, deterrence of disrupting the social order, and more. I’m not sure what you’re reading into the word “tool” so I’m not sure whether I want to claim to see it exclusively as a tool or not.
I want to ask “why”, because I don’t fully understand this answer, but I fear that I must ask the more difficult question first: what do you mean by “personhood”? I know it can be a tricky question, but I don’t think I’ll be able to figure out your position otherwise. However, this next line gave me pause, as well:
Since I am not a deontologist (as far as I know, at least), I read this as saying: “rights are sets of rules that describe actions which any person (pending Alicorn’s definition of personhood) must avoid at all costs”. Is that what “wrong to violate” means?
I’m having trouble with the “process” part. From my perspective, whenever I ask you, “how do you know whether a person has the right X”, you either list a bunch of additional rights that would be violated if people didn’t have the right X; or derive right X from other rights, whose origin I don’t fully understand, either. Clearly I’m missing something, but I’m not sure what it is.
I do acknowledge that your system of rights makes a sort of sense; but the only way I know of interpreting this system is to look at it and ask, “will these rules, if implemented, result in a world that is better than, or at least as good as, the world we live in now?” That is, from my perspective, the rules are instrumental but not terminal values. As far as I understand, deontologists treat rights as terminal values—is that correct?
I did not want to make it sound like I’m putting words in your mouth. Whenever I say something like, “you, Alicorn, believe X”; I only mean something like, “to the best of my understanding, which may be incorrect or incomplete, Alicorn believes X, please correct me if this is not so”.
By “tool”, I mean something like, “a non-sapient entity which a sapient agent may use in order to more easily accomplish a limited set of tasks”. For example, a hammer is a tool for driving nails into wood (or other materials). The “grep” command is a tool for searching text files. The civil legal system could be seen as a tool for extracting damages from parties who wronged you in some way.
I believe (and you might disagree) that most tools (arguably, all tools, though weapons are a borderline case) are morally neutral. A hammer is neither good nor evil; it’s just a hammer. I can use it to build shelter for a homeless man, thus performing a good act; or I could use it to smash that man’s skull, thus performing an evil act; but it is the act (and possibly the person performing it) that is good or evil, not the hammer.
I don’t have a really thorough account of personhood. It includes but is not limited to paradigmatic adult humans.
I definitely wouldn’t have chosen that phrasing, but it doesn’t seem obviously wrong?
I’m not sure where you want me to ground this. Where do you ground your morality?
I wouldn’t choose the word “value”, but they definitely are non-instrumental in nature.
I will tentatively classify the legal system as a tool in this sense, albeit a tool for doing some very abstract things like “solve coordination problems”.
So how do you know that rights “naturally fall out of” personhood, if you don’t really know what personhood even is?
Ok, so in this case my problem is with the prescriptive nature of rights. What does “must avoid” mean in this case? I personally can think of only three (well, maybe 2.5) reasons why an action must be executed or avoided:
The action will lead to some highly undesirable consequences. For example, jumping off of very high places must be avoided at all costs, because doing so will result in your death.
The preference or aversion to the action is hardwired into the person (via genetics, in case of humans). For example, most humans—even newborn ones—will instinctively attempt to stay away from ledges.
The action is part of the laws of nature that act upon all physical objects. For example, humans on Earth can’t help but fall down, should they find themselves in mid-air with no support. The same is true of rocks.
I’m not sure, but I don’t think any of these points adequately describe deontological rules. Point #1 is conditional: if your death becomes highly desirable, you may find jumping off a cliff to be a reasonable action to take. Points #2 and #3 are more descriptive than prescriptive. Regarding #2, yes we are wired to avoid ledges, but we are also wired to desire fatty foods, and in the modern world some of us must fight that compulsion every day or face highly undesirable consequences. Point #3, of course, is entirely descriptive; yes, objects fall down, but what you do with that knowledge is up to you.
Note also that there is a clear strategy for learning about reasons #1, 2, and 3: we look at the evidence and attempt to adjust our belief based on it. Again, I don’t understand how we can learn about deontological rules at all.
I have some sort of a utility function which is hardwired into my personality. Lacking perfect introspection, I can’t determine what it is exactly, but based on available evidence I’m reasonably sure that it includes things like “seek pleasure, avoid pain” and “increase the pleasure and reduce the pain of other people in your tribe”. Based on this, I can evaluate the fitness of each action and act (or choose not to act) to maximize fitness.
Obviously, in practice, I don’t apply this reasoning explicitly to every action; just like you don’t apply the full Bayesian reasoning machinery to every rustling noise that you hear from the bushes. It would take too long, and by the time you figure out P(tiger | rustling), you’d be tiger-food. Still, that’s merely an optimization strategy, which is reducible to the underlying reasoning.
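(For what it’s worth, the skipped computation is tiny once the numbers are in hand; here’s a sketch with entirely made-up numbers, just to show what the brain would be approximating:)

```python
# Made-up numbers, purely to illustrate the P(tiger | rustling) computation
# the comment says there's no time to carry out explicitly.
p_tiger = 0.01                 # prior: a tiger is rarely behind the bushes
p_rustle_given_tiger = 0.8     # a tiger would probably rustle them
p_rustle_given_no_tiger = 0.1  # wind rustles them occasionally too

# Law of total probability, then Bayes' rule.
p_rustle = (p_rustle_given_tiger * p_tiger
            + p_rustle_given_no_tiger * (1 - p_tiger))
p_tiger_given_rustle = p_rustle_given_tiger * p_tiger / p_rustle
print(round(p_tiger_given_rustle, 3))  # ~0.075: still unlikely, but run anyway
```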
I’m starting to get concerned that you have some intractable requirements for completeness of a philosophical theory before one can say anything about it at all. Do you think your ethics would withstand a concerted hammering like this? Do you know how to compare utility between agents? What are your feelings on population ethics? How do you deal with logical uncertainty and Pascal’s muggings in complex Omega-related thought experiments? I’m not planning to make you solve these peripheral problems before allowing you to say that you endorse actions that have the best consequences over their alternatives (or whatever framing you prefer).
It means “if you don’t avoid it, you will be doing something wrong”. That’s all. Your guesses are wrong. Did you read Deontology for Consequentialists?
I wasn’t trying to Gish Gallop you if that’s what you’re implying. That said, I think you are underestimating the inferential distance here. When you say, “rights naturally fall out of personhood”, I literally have no idea what that means. As you saw from my previous comments, I tried to stay away from defining personhood as long as possible, but I’m not sure I can continue to do that if your only answer to “what are rights” is something like “an integral part of personhood”.
Pretty much the only possible ways I can translate the word “wrong” are a) “will lead to highly undesirable consequences” and b) “is physically impossible”. You ask whether I read Deontology for Consequentialists.
Yes I did, and I failed to fully understand it, as well. As I said before, I agree with most (or possibly all) of the rights you listed in your comments, as well as in your article; I just don’t understand what process you used to come up with those rights. For example, I agree with you that “killing people is wrong” is a good rule; what I don’t understand is why you think so, or why you think that “photographing people without permission is wrong” is not a good rule. Your article, as far as I can tell, does not address this.
Treat “acting in a way that violates a right” as “undesirable consequences”—that is, negative utility—and everything else as neutral or positive utility (but not positive enough to outweigh rights violations).
“Wrong” here is, essentially, “carrying negative utility”—not instrumentally, terminally.
Disclaimer: I am not a deontologist, and I’m certainly not Alicorn.
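If it helps bridge the gap, here is a minimal sketch of that translation in code. This is entirely my own illustration, with `violates_a_right` left as a made-up stub; it is not Alicorn’s actual decision procedure. The point is that rights violations get lexical priority rather than a large finite penalty, and that several permissible actions can survive, matching the earlier observation that a deontologist’s morality needn’t single out a unique best action:

```python
# A minimal, illustrative sketch of "deontology in consequentialist dress":
# rights violations are lexically bad; everything else is ordinary utility.
# `violates_a_right` is a made-up stub, not anyone's real theory of rights.

def violates_a_right(action):
    """Stub standing in for whatever process identifies rights violations."""
    return action.get("violates_right", False)

def ordinary_utility(action):
    """Ordinary (neutral-or-positive) value of an action's consequences."""
    return action.get("value", 0)

def permissible_actions(actions):
    # Lexical priority: no amount of ordinary utility can buy back a
    # violation, so violating actions are filtered out, not merely penalized.
    return [a for a in actions if not violates_a_right(a)]

options = [
    {"name": "lie to spare feelings", "violates_right": True, "value": 10},
    {"name": "stay silent", "value": 3},
    {"name": "tell the truth tactfully", "value": 5},
]

allowed = permissible_actions(options)
print([a["name"] for a in allowed])        # both non-violating options remain
print(max(allowed, key=ordinary_utility))  # optional consequentialist tiebreak
```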
Well, I’m out of ideas for bridging the gap. Sorry.
Fair enough; I appreciate the effort nonetheless.
Ok, now how would you translate “undesirable”?
Where does your utility function come from?
These are good questions. It seems like deontologists have difficulty reconciling seemingly conflicting rights.
In my main reply to the original post, I discuss some of the conflicts between truthfulness and privacy. If people have a right to not be lied to, and people also have privacy rights, then these rights could clash in some situations.
Why these rights, and not others? For example, why a right to not be murdered, instead of a right to murder one person per year? A once-a-year right to murder can be formulated as a negative right, i.e. non-interference as you murder one person.
(I agree with the listed criteria for rights, BTW.)
Quantity limitations on rights are inelegant (what does one Earth year have to do with personhood-in-general?), so there’s that. Even if you frame it as “the right to go uninterfered-with during the course of one murder per year”, that has a heck of a lot of fiddly bits.
It also doesn’t interact with itself very well. Suppose you are trying to murder me, and I’m the first person you’ve tried to murder all year, and I haven’t murdered anyone all year either, so I try to murder you back—am I interfering with you? It sure looks like it, but I’m just exercising my own right… “Right not to be murdered” doesn’t do that sort of self-defeating.
I have other reasons to prefer the “right not to be murdered” version but they are failing to come verbally clear. Something about self-containedness that I’m having trouble explicating.
Consequentialist reasoning which seems to align fairly well with Alicorn’s conclusions (at least the one about it being in some situations correct to hide the truth by being selective even when this in some sense deceives the listener, and at the same time being less correct to directly lie) is touched on here, if that’s useful to you.
Essentially: You don’t know for sure if a person wants general encouragement/niceties or a genuine critique. One way to deal with this is to say something nice+encouraging+true which leaves room for you to switch to “okay but here is what you could do better” mode without contradicting your previous nicety if and only if they communicate clearly they want your full opinion after hearing your careful wording.
I find the reaction to this comment, both in the downvotes and some of the responses, interesting in light of the recent discussion about Tell Culture. That post was highly upvoted, but some people in the comments expressed the opinion that even the people who claim to endorse Tell culture really don’t, and that people who actually consistently operated on Tell Culture would end up getting punished, even in a community where most people claimed to endorse Tell.
As far as I can tell, the reactions to this comment are support for that hypothesis, as I see you as a person who consistently operates on Tell, and then (as in this case) occasionally gets censured for that, even in a community where a lot of people previously claimed that Tell sounds awesome.
I think you have it backwards. Chris Told, Alicorn punished him for it, and the community retaliated. This is a great victory for Tell culture and radical honesty, as long as you don’t believe Alicorn embodies them.
A key difference is that the community is incrementalist and consequentialist, while Alicorn is absolutist and deontologist. A lot of the comments don’t believe that Alicorn accurately identifies liars. Expelling him is a step backwards from her claimed goal of honest associates. And, indeed, she did specify it was instrumental to this goal and not just a rule she follows without regard to consequences. But it’s probably also that. The community’s failure to grasp the deontological aspects may make its reaction unfair; but I cannot judge for the same reason. The basic reaction is that she is a very strong instance of Guess culture, where her associates have to guess how much to lie to her and are strongly discouraged from talking about it.
I don’t think that follows. The fact that we punish people for telling others about X, and we don’t punish them if we don’t, doesn’t mean we’re punishing them for telling; it means we’re punishing them for X. We’d really like to punish them for X whether they tell or not, it’s just that telling makes it easier.
It may be more understandable to think about it as cheating. You can either lose, or cheat and win. If you lose, you suffer all the effects of a loss. If you cheat, you may not suffer at all. But we don’t describe that as “punishment for not cheating”. It’s the same here: you can lose (have your opinions judged poorly) or cheat (conceal your opinions by not telling anyone, and escape being judged for them).
You favor lying to people to scam money out of them because it would be inconvenient for your education plans to not be able to scam money out of them? That seems unethical.
This seems like a wilfully unfair description of Chris’s position.
It’s a scam if you take someone’s money intending to do something other than what you tell them you’ll do with it, or (maybe) intending to do it for very different reasons from the ones you give them, or with very different prospects of success. But Chris’s hypothetical youngster is doing with the money exactly what his or her parents expect (getting educated), with the same purpose and the same likely outcomes as if s/he were straight. Where’s the scam?
And the donors in question aren’t generic “people”. They’re hypothetical-youngster’s parents. Maybe that makes it worse (“you’d lie to your own flesh and blood?”), maybe it makes it better (arguably they owe him/her an education, if they can afford it and s/he would genuinely gain from it), but it certainly makes a difference.
I think there is an argument to be made against Chris’s position along those lines, but such tendentious language isn’t the way to start it.
The parents are presumably intending to support their child along a particular path, which leads through college, and involves a good career, marriage to a nice woman, and grandchildren.
Another factor is that the student is protecting their parents from doing something that they will likely later regret.
I’ve known a number of folks who came out to their parents and got fearful and hostile responses — which the parents later apologized for and tried to make amends for. This seems to be a pretty common pattern, in fact. Broadly, people want to have good relations with their families, but they may not always act that way in the moment — and they come to regret actions that harm those relations.
Putting people in situations where they will predictably behave in ways they will later regret is widely regarded to be pretty crappy social behavior. It’s certainly not the sort of thing that people endorse doing to those they love. If avoiding that situation requires a certain amount of narrowly targeted deception, so be it.
Adopting a deontological-style rule of not explicitly lying (using evasion or refusing to answer, for instance) may be worthwhile. Avoiding deception in general is a good idea for consensual relationships willingly entered and willingly left. Parent/child is not that kind of relationship, though — not in our society and economy. Even though it would be desirable to cultivate a world in which there were no violent outbursts in response to true facts, it would be negligence to the point of malice to advise people in dependent social situations to pretend that they live in such a world.
People whose families eventually realized the error of their ways are probably rather more comfortable talking about their experiences than people whose families really did reject them permanently. Suspecting there may be some availability bias going on here.
Um. Think about that statement for a second, if you don’t see what’s wrong with it try replacing “gay” with “pedophile” or “rapist” in your example.
In a world of consensual social relations, there aren’t rapists or violent homophobes, yes. I think you’re reading the boundaries around the hypothetical scenario differently from how I intended them there, although I see how the phrasing is unclear.
You present a compelling argument that “scamming money out of people because it would be inconvenient not to” can be an entirely ethical and appropriate course of action.
Lumping a particular scenario that has already been analysed on its merits and found reasonable into a despised reference class serves to change the reference class, not the instance.
Really, where? Or does “analysed on merit” now mean asserted?
Extensively in the thread, with people having various opinions both regarding effectiveness and ethical appropriateness. The conversation seems to have been of an acceptable quality. Also, presumably, the people involved thought about and analysed their intuitions before making the assertions. This isn’t me claiming that whatever position people have is necessarily ‘right’—people disagreed, after all. I am just suggesting that the reference-class labelling is rather irrelevant when screened off by the specific details already.
By way of explanation by anecdote: I have fond memories of hearing the observation “You make a compelling argument for eating babies” in response to a similar pattern, i.e. “A is in X. X is evil, therefore A is evil.” --> “A is in X. A isn’t evil, therefore not all X are evil.”
Link please. I haven’t seen anything resembling “analyzing the scenario on its merits”.
And the worst argument in the world rears its ugly head once more.
Is there a named fallacy for using words which radically downplay or play up the seriousness of a situation?
Teenagers sometimes get thrown out of their families for coming out. This is more than an inconvenience, and affects more than their educational plans.
If there is a fallacy here, I would say it’s the fallacy of the “loaded question” or the use of “loaded language.” Here, the question presupposes that it’s a “scam” to lie to one’s parents about sexual orientation in order to obtain their financial support for college.
Nominull makes an interesting argument but he ruins it with his loaded use of the word “scam.”
Here’s a charitable interpretation of the point:
You don’t have an entitlement to educational support from your parents and your parents have the right to withhold that support for any reason. So by lying to them about your sexual orientation, you are fraudulently depriving them of their rights; in effect you are scamming your own parents.
I still disagree with this argument but I think it’s a close call. Part of the problem is that in determining financial aid, colleges assume there will be support from one’s parents. If you tell the college financial aid office that your parents have cut you off because they disapprove of homosexuality, chances are the college won’t step up and help you. So there is kind of a quasi-right to college support from one’s parents.
The other thing is that the parents probably already know at some level that their child is a homosexual just like fat people already know that they are fat and cheated-on spouses often know that they are being cheated on. So there’s something to be said for allowing the person to continue in their state of denial or at least not reminding them of things they prefer not to know.
And last, there is an idea that it’s wrong to discriminate based on sexual orientation. I’m not sure how strong this argument is in the context of personal and family relations.