Moreover, all of this is contingent upon you being found out. In a scenario like this, is that really that likely?
Yes. It is.
That’s not very helpful, though. Could you go into specifics?
In general, any argument for the success of a plan that sounds like “how likely is it that it could go wrong?” is a planning fallacy waiting to bite you.
Specifically, people can be quite good at detecting lies. On one theory, that’s what we’ve evolved these huge brains for: an arms race of lying vs. detecting lies. If you lie as well as you possibly can, you’re only keeping up with everyone else detecting lies as well as they can. On internet forums, I see concern trolls and fake friends being unmasked pretty quickly. Face to face, when person A tells me something about person B not present, I have sometimes had occasion to think, “ok, that’s your story, but just how much do I actually believe it?”, or “that was the most inept attempt to plant a rumour I’ve ever heard; I shall be sure to do exactly what you ask and not breathe a word of this to anyone, especially not to the people you’re probably hoping I’ll pass this on to.” If it’s a matter that does not much concern me, I won’t even let person A know they’ve been rumbled.
In the present case, the result of being found out is not only that your relationship ends with the person whose religion you were trying to undermine, but they will think that an atheist tried to subvert their religion with lies, and they will be completely right. “As do all atheists”, their co-religionists will be happy to tell them afterwards, in conversations you will not be present at.
On internet forums, I see concern trolls and fake friends being unmasked pretty quickly.
In what manner do you think it is most likely for this to occur?
Face to face, when person A tells me something about person B not present, I have sometimes had occasion to think, “ok, that’s your story, but just how much do I actually believe it?”, or “that was the most inept attempt to plant a rumour I’ve ever heard; I shall be sure to do exactly what you ask and not breathe a word of this to anyone, especially not to the people you’re probably hoping I’ll pass this on to.” If it’s a matter that does not much concern me, I won’t even let person A know they’ve been rumbled.
If possible, could you outline some contributing factors that led to you spotting the lie?
If possible, could you outline some contributing factors that led to you spotting the lie?
That’s a bit like asking how I recognise someone’s face, or how I manage to walk in a straight line. Sometimes things just “sound a bit off”, as one says, which of course is not an explanation, just a description of what it feels like. That brings to my attention the distinction between what has been said and whether it is true, and then I can consider what other ways there are of joining up the dots.
Of course, that possibility is always present when one person speaks to another, and having cultivated consciousness of abstraction, it requires little activation energy to engage. In fact, that’s my default attitude whenever person A tells me anything negatively charged about B: not to immediately think “what a bad person B is!”, although they may be, but “this is the story that A has told me; how likely does it seem to me to be true?”
Well, based on that description, would I be accurate in saying that it seems as though your “method” would generate a lot of false positives?
You can always trade off specificity for sensitivity. It’s also possible to ask additional questions when you are suspicious.
Suspending judgement is not a false positive. And even from such a limited interaction as seeing the name and subject line of an email, I am almost never wrong in detecting spam, and that’s the spam that got past the automatic filters. I don’t think I’m exceptional; people are good at this sort of thing.
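A toy sketch of what “trading specificity for sensitivity” means here, assuming a made-up per-statement suspicion score; all the numbers are invented for illustration, not anything anyone in this exchange measured:

```python
# Lowering the threshold on a suspicion score catches more lies
# (higher sensitivity) at the cost of flagging more honest statements
# (lower specificity). Scores and labels here are invented.
statements = [
    # (suspicion_score, actually_a_lie)
    (0.9, True), (0.7, True), (0.6, False), (0.4, True),
    (0.3, False), (0.2, False), (0.1, False), (0.05, False),
]

def rates(threshold):
    lies = [s for s, lie in statements if lie]
    truths = [s for s, lie in statements if not lie]
    sensitivity = sum(s >= threshold for s in lies) / len(lies)
    specificity = sum(s < threshold for s in truths) / len(truths)
    return sensitivity, specificity

for t in (0.5, 0.25):
    sens, spec = rates(t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Dropping the threshold from 0.5 to 0.25 raises sensitivity from 2/3 to 3/3 while specificity falls from 4/5 to 3/5. Asking additional questions when suspicious amounts to gathering more evidence before classifying, which can improve both at once.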
My hobby: looking at the section of the sidebar called “Recent on rationality blogs”, and predicting before mousing over the links whether the source is SlateStarCodex, Overcoming Bias, an EA blog, or other. I get above 90% there, and while “Donor coordination” is obviously an EA subject, I can’t explain what makes “One in a Billion?” and “On Stossel Tonight” clearly OB titles, while “Framing for Light Instead of Heat” could only be SSC.
One in a Billion?
Deliberately uninformative title. Robin Hanson does this fairly often, Scott much less so. Very short, which is highly characteristic of OB. Very large number is suggestive of “large-scale” concerns, more characteristic of OB than of Scott. Nothing that obviously suggests EAism.
On Stossel Tonight
Self-promoting (RH frequently puts up things about his public appearances; other sidebarry folks don’t). Very short. Assumes you know what “Stossel” is; if you don’t this reads as “deliberately uninformative” (somewhat typical of OB), and if you do it reads as “right-wing and businessy connections” (very typical of OB).
(As you may gather, I share your hobby.)
I don’t think I’m exceptional; people are good at this sort of thing.
Huh. I must just be unusually stupid with respect to “this sort of thing”, then, as I’m rarely able to discern a plausible-sounding lie from the truth based on nonverbal cues. (As a result, my compensation heuristic is “ignore any and all rumors, especially negative ones”.) Ah, well. It looks like I implicitly committed the typical mind fallacy in assuming that everyone would have as much difficulty as I do when detecting “off-ness”.
My hobby: looking at the section of the sidebar called “Recent on rationality blogs”, and predicting before mousing over the links whether the source is SlateStarCodex, Overcoming Bias, an EA blog, or other. I get above 90% there, and while “Donor coordination” is obviously an EA subject, I can’t explain what makes “One in a Billion?” and “On Stossel Tonight” clearly OB tiles, while “Framing for Light Instead of Heat” could only be SSC.
That sounds like an awesome hobby, and one that I feel like I should start trying. Would you say you’ve improved at doing this over time, or do you think your level of skill has remained relatively constant?
Would you say you’ve improved at doing this over time, or do you think your level of skill has remained relatively constant?
I couldn’t really say. Back when I read OB, I’d often think, “Yes, that’s a typical OB title”, but of course I knew I was looking at OB. When the sidebar blogroll was introduced here, I realised that I could still tell the OB titles from the rest. The “X is not about Y” template is a giveaway, of course, but Hanson hasn’t used that for some time. SSC tends to use more auxiliary words, OB leaves them out. Where Scott writes “Framing For Light Instead Of Heat”, Hanson would have written “Light Not Heat”, or perhaps “Light Or Heat?”.
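For fun, a minimal sketch of how the cues named above (very short titles, the “X is not about Y” template, auxiliary words) might be mechanised; the word list, the weights, and the function name guess_source are all invented for illustration, not a description of how anyone here actually does it:

```python
# A minimal sketch mechanising the title cues mentioned above:
# very short titles and the "X is not about Y" template point to
# Overcoming Bias; plentiful auxiliary words point to SlateStarCodex.
# Word list and weights are invented for illustration.
import re

AUX_WORDS = {"for", "instead", "of", "the", "a", "an", "to", "and", "is", "on"}

def guess_source(title: str) -> str:
    words = title.lower().rstrip("?").split()
    score = 0  # positive = Overcoming Bias, negative = SlateStarCodex
    if re.search(r"\bis not about\b", title.lower()):
        score += 3                              # the classic OB template
    if len(words) <= 3:
        score += 2                              # OB titles run very short
    if sum(w in AUX_WORDS for w in words) >= 2:
        score -= 2                              # auxiliary words read as SSC
    return "Overcoming Bias" if score > 0 else "SlateStarCodex"

for title in ["Light Not Heat",
              "Framing For Light Instead Of Heat",
              "Politics is not about Policy"]:
    print(title, "->", guess_source(title))
```

On these three examples it reproduces the judgements above, but only because the weights were chosen to; the point is just that the cues are concrete enough to score.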