That the plan would fail if the lie is detected is not under contest, I think. However, it is, in my opinion, a relatively trivial failure mode, where “trivial” is meant in the sense that it is obvious, not that it is necessarily easy to avoid. For instance, equations of the form a^n + b^n = c^n have trivial solutions such as (a,b,c) = (0,0,0), but those are not interesting. My original statement was meant more as a disclaimer than anything else, i.e. “Well obviously this is an easy way for the plan to fail, but getting past that...” The reason for this is that there might be more intricate/subtle failure modes that I’ve not yet thought of, and my statement was intended more as an invitation to think of some of these less trivial failure modes than as an argument for the plan’s success. This, incidentally, is why I think your analogies don’t apply; the failure modes that you mention in those cases are so broad as to be blanket statements, which leaves no room for more interesting failure modes. A better statement in your sports analogy, for example, might be, “Well, if our star player isn’t sick, we stand a decent chance of winning,” with the unstated implication being that of course there might be other complications independent of the star player being sick. (Unless, of course, you think the possibility of the lie being detected is the only failure mode, in which case I’d say you’re being unrealistically optimistic.)
Also, it tends to be my experience that lies of omission are much easier to cover up than explicit lies, and the sort suggested in the original scenario seems to be closer to the former than to the latter. Any comments here?
(I also think that the main problem with lying, from a moral perspective, is not just that it causes epistemic inaccuracy on the part of the person being lied to, but that it causes inaccuracies in such a way that it interferes with them instrumentally. Lying omissively about one’s mental state, which is unlikely to be instrumentally important anyway, in an attempt to improve the other person’s epistemic accuracy with regard to the world around them, a far more instrumentally useful task, seems like it might actually be morally justifiable.)
Lying also does heavy damage to one’s credibility. The binary classification of other people into “honest folk” and “liars” is quite widespread in the real world. Once you get classified into “liars”, it’s pretty hard to get out of there.
Well, you never actually say anything untrue; you’re just acting uncertain in order to have a better chance of getting through to the other person. It seems intuitively plausible that the reputational effects from that might not be as bad as the reputational effects that would come from, say, straight-out lying; I accept that this may be untrue, but if it is, I’d want to know why. Moreover, all of this is contingent upon you being found out. In a scenario like this, is that really that likely? How is the other person going to confirm your mental state?
YMMV, of course, but I think what matters is the intent to deceive. Once it manifests itself, the specific forms the deception takes do not matter much (though their “level” or magnitude does).
How is the other person going to confirm your mental state?
This is not a court of law, no proof required—“it looks like” is often sufficient, if only for direct questions which will put you on the spot.
This is not a court of law, no proof required—“it looks like” is often sufficient, if only for direct questions which will put you on the spot.
Well, yes, but are they really going to jump right to “it looks like” without any prior evidence? That seems like major privileging of the hypothesis. I mean, if you weren’t already primed by this conversation, would you automatically think “They might be lying about being unconvinced” if someone starts saying something skeptical about, say, cryonics? The only way I could see that happening is if the other person lets something slip, and when the topic in question is your own mental state, it doesn’t sound too hard to keep the fact that you already believe something concealed. It’s just like passing the Ideological Turing Test, in a way.
but are they really going to jump right to “it looks like” without any prior evidence?
Humans, in particular neurotypical humans, are pretty good at picking up clues (e.g. nonverbal) that something in a social situation is not quite on the up-and-up. That doesn’t necessarily rise to the conscious level of a verbalized thought “They might be lying...”, but manifests itself as a discomfort and unease.
it doesn’t sound too hard
It’s certainly possible and is easy for a certain type of people. I expect it to be not so easy for a different type of people, like ones who tend to hang out at LW… You need not just conceal your mental state, you need to actively pretend to have a different mental state.
Fair enough. How about online discourse, then? I doubt you’d be able to pick up much nonverbal content there.
It is much easier to pretend online, but it’s also harder to convince somebody of something.
Would you say the difficulty of convincing someone scales proportionally with the ease of pretending?
Hm. I don’t know. I think it’s true when comparing a face-to-face conversation with an online one, but I have no idea whether that can be extended to a general rule.
Moreover, all of this is contingent upon you being found out. In a scenario like this, is that really that likely?
Yes. It is.
That’s not very helpful, though. Could you go into specifics?
In general, any argument for the success of a plan that sounds like “how likely is it that it could go wrong?” is a planning fallacy waiting to bite you.
Specifically, people can be quite good at detecting lies. On one theory, that’s what we’ve evolved these huge brains for: an arms race of lying vs. detecting lies. If you lie as well as you possibly can, you’re only keeping up with everyone else detecting lies as well as they can. On internet forums, I see concern trolls and fake friends being unmasked pretty quickly. Face to face, when person A tells me something about person B not present, I have sometimes had occasion to think, “ok, that’s your story, but just how much do I actually believe it?”, or “that was the most inept attempt to plant a rumour I’ve ever heard; I shall be sure to do exactly what you ask and not breathe a word of this to anyone, especially not to the people you’re probably hoping I’ll pass this on to.” If it’s a matter that does not much concern me, I won’t even let person A know they’ve been rumbled.
In the present case, the result of being found out is not only that your relationship ends with the person whose religion you were trying to undermine, but they will think that an atheist tried to subvert their religion with lies, and they will be completely right. “As do all atheists”, their co-religionists will be happy to tell them afterwards, in conversations you will not be present at.
On internet forums, I see concern trolls and fake friends being unmasked pretty quickly.
In what manner do you think it is most likely for this to occur?
Face to face, when person A tells me something about person B not present, I have sometimes had occasion to think, “ok, that’s your story, but just how much do I actually believe it?”, or “that was the most inept attempt to plant a rumour I’ve ever heard; I shall be sure to do exactly what you ask and not breathe a word of this to anyone, especially not to the people you’re probably hoping I’ll pass this on to.” If it’s a matter that does not much concern me, I won’t even let person A know they’ve been rumbled.
If possible, could you outline some contributing factors that led to you spotting the lie?
If possible, could you outline some contributing factors that led to you spotting the lie?
That’s a bit like asking how I recognise someone’s face, or how I manage to walk in a straight line. Sometimes things just “sound a bit off”, as one says, which of course is not an explanation, just a description of what it feels like. That brings to my attention the distinction between what has been said and whether it is true, and then I can consider what other ways there are of joining up the dots.
Of course, that possibility is always present when one person speaks to another, and having cultivated consciousness of abstraction, it requires little activation energy to engage. In fact, that’s my default attitude whenever person A tells me anything negatively charged about B: not to immediately think “what a bad person B is!”, although they may be, but “this is the story that A has told me; what of it seems to me likely to be true?”
Well, based on that description, would I be accurate in saying that it seems as though your “method” would generate a lot of false positives?
You can always trade off specificity for sensitivity. It’s also possible to ask additional questions when you are suspicious.
Suspending judgement is not a false positive. And even from such a limited interaction as seeing the name and subject line of an email, I am almost never wrong in detecting spam, and that’s the spam that got past the automatic filters. I don’t think I’m exceptional; people are good at this sort of thing.
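To make the “trade off specificity for sensitivity” remark above concrete: a detector that flags a statement as a lie whenever some suspicion score crosses a threshold can be tuned in either direction. The sketch below is purely illustrative; the suspicion scores, labels, and threshold values are hypothetical and not taken from anything in this discussion.

```python
# Illustrative sketch: how moving a decision threshold trades specificity
# for sensitivity. All numbers here are made up for demonstration.

def sensitivity_specificity(scores, labels, threshold):
    """Flag a statement as a lie when its suspicion score meets the threshold,
    then compute sensitivity (lies caught) and specificity (honest statements passed)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical data: suspicion scores in [0, 1]; label 1 = actually a lie, 0 = honest.
scores = [0.15, 0.30, 0.45, 0.55, 0.70, 0.85]
labels = [0, 0, 1, 0, 1, 1]

for threshold in (0.8, 0.5, 0.2):
    sens, spec = sensitivity_specificity(scores, labels, threshold)
    print(f"threshold={threshold}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Lowering the threshold catches more of the actual lies (sensitivity rises) at the cost of flagging more honest statements (specificity falls); raising it does the reverse. Asking additional questions when suspicious is, in effect, a way of gathering more evidence before committing to a classification.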
My hobby: looking at the section of the sidebar called “Recent on rationality blogs”, and predicting before mousing over the links whether the source is SlateStarCodex, Overcoming Bias, an EA blog, or other. I get above 90% there, and while “Donor coordination” is obviously an EA subject, I can’t explain what makes “One in a Billion?” and “On Stossel Tonight” clearly OB titles, while “Framing for Light Instead of Heat” could only be SSC.
Deliberately uninformative title. Robin Hanson does this fairly often, Scott much less so. Very short, which is highly characteristic of OB. Very large number is suggestive of “large-scale” concerns, more characteristic of OB than of Scott. Nothing that obviously suggests EAism.
On Stossel Tonight
Self-promoting (RH frequently puts up things about his public appearances; other sidebarry folks don’t). Very short. Assumes you know what “Stossel” is; if you don’t this reads as “deliberately uninformative” (somewhat typical of OB), and if you do it reads as “right-wing and businessy connections” (very typical of OB).
(As you may gather, I share your hobby.)
I don’t think I’m exceptional; people are good at this sort of thing.
Huh. I must just be unusually stupid with respect to “this sort of thing”, then, as I’m rarely able to discern a plausible-sounding lie from the truth based on nonverbal cues. (As a result, my compensation heuristic is “ignore any and all rumors, especially negative ones”.) Ah, well. It looks like I implicitly committed the typical mind fallacy in assuming that everyone would have a similar level of difficulty as I do when detecting “off-ness”.
My hobby: looking at the section of the sidebar called “Recent on rationality blogs”, and predicting before mousing over the links whether the source is SlateStarCodex, Overcoming Bias, an EA blog, or other. I get above 90% there, and while “Donor coordination” is obviously an EA subject, I can’t explain what makes “One in a Billion?” and “On Stossel Tonight” clearly OB titles, while “Framing for Light Instead of Heat” could only be SSC.
That sounds like an awesome hobby, and one that I feel like I should start trying. Would you say you’ve improved at doing this over time, or do you think your level of skill has remained relatively constant?
Would you say you’ve improved at doing this over time, or do you think your level of skill has remained relatively constant?
I couldn’t really say. Back when I read OB, I’d often think, “Yes, that’s a typical OB title”, but of course I knew I was looking at OB. When the sidebar blogroll was introduced here, I realised that I could still tell the OB titles from the rest. The “X is not about Y” template is a giveaway, of course, but Hanson hasn’t used that for some time. SSC tends to use more auxiliary words, OB leaves them out. Where Scott writes “Framing For Light Instead Of Heat”, Hanson would have written “Light Not Heat”, or perhaps “Light Or Heat?”.
It sounds like you’re implying that most lies are easily found, and consequently, most unchallenged statements are truths.
That’s really really really stretching my capacity to believe. Either you’re unique in this ability, or you’re also committing the typical mind fallacy, w.r.t. thinking all people are only as good at lying (at max) as you are at sniffing them out.
Moreover, all of this is contingent upon you being found out. In a scenario like this, is that really that likely?
Yes. It is.
It sounds like you’re implying that most lies are easily found
In a scenario like this, i.e. pretending to be undergoing a deep crisis of faith in order to undermine someone else’s. My observation is that in practice, concern trolling is rapidly found out, and the bigger the audience, the shorter the time to being nailed.
thinking all people are only as good at lying (at max) as you are at sniffing them out.
On the whole, people are as good at lying as, on the whole, people are at finding them out, because it’s an arms race. Some will do better, some worse; anyone to whom the idea, “why not just lie!” has only just occurred is unlikely to be in the former class.