For example, by writing a very popular fanfiction (HPMOR)
For anyone who hasn’t read HP and thinks fantasy is weird, he lost points for that.
One way to get more points is to listen to other people’s weird ideas. In fact, if someone else proposes a weird idea that you already agree with, it may be a good idea not to let on, but publicly “get convinced”, to gain points. (Does that count as Dark Arts?)
I have actually thought of that, but in relation to a different problem: not that of seeming less “weird”, but that of convincing someone of an unpopular idea. It seems like the best way to convince people of something is to act like you’re still in the process of being convinced yourself; for instance, I don’t remember where, but I do remember reading an anecdote about someone who was able to convince his girlfriend of atheism while in a genuine crisis of faith himself. Incidentally, I should emphasize that his crisis of faith was genuine at the time—but it should work even if it’s not genuine, as long as the facade is convincing.

I theorize that this may be due to in-group affiliation: if you’re already sure of something and trying to convince me, then you’re an outsider pushing an agenda, but if you yourself are unsure and are coming to me for advice, you’re on “my side”. It’s easy to become entangled in just-so stories, so obviously take all of this speculation with a generous helping of salt, but it seems at least worth a try.

(I do agree, however, that this seems borderline Dark Arts, so maybe not that great of an idea, especially if you value your relationship with that person enough to care if you’re found out.)
This is called “concern trolling”.
It isn’t “borderline Dark Arts”, it’s straight-out lying.
This imagines the plan working, and uses that as argument for the plan working.
I was not aware that it had a name; thank you for telling me.
Agreed. The question, however, is whether or not this is sometimes justified.
Well, no. It assumes that the plan doesn’t fall prey to an obvious failure mode, and suggests that if it does not, it has a high likelihood of success. (The idea being that if failure mode X is avoided, then the plan should work, so we should be careful to avoid failure mode X when/if enacting the plan.)
The failure mode (people detecting the lie) is what it would be for this plan to fail. It’s like the empty sort of sports commentary that says “if our opponents don’t get any more goals than us, we can’t lose”, or the marketing plan that amounts to “if we get just 0.001% of this huge market, we’ll be rich.”
See also. Lying is hard, and likely beyond the capability of anyone who has just discovered the idea “I know, why not just lie!”
That the plan would fail if the lie is detected is not under contest, I think. However, it is, in my opinion, a relatively trivial failure mode, where “trivial” is meant in the sense that it is obvious, not that it is necessarily easy to avoid. For instance, equations of the form a^n + b^n = c^n have the trivial solution (a,b,c) = (0,0,0), but it is not interesting.

My original statement was meant more as a disclaimer than anything else, i.e. “Well, obviously this is an easy way for the plan to fail, but getting past that...” The reason was that there might be more intricate or subtle failure modes that I’ve not yet thought of, and my statement was intended more as an invitation to think of some of those less trivial failure modes than as an argument for the plan’s success.

This, incidentally, is why I think your analogies don’t apply; the failure modes you mention in those cases are so broad as to be blanket statements, which precludes the more interesting failure modes. A better statement in your sports analogy, for example, might be, “Well, if our star player isn’t sick, we stand a decent chance of winning,” with the unstated implication that of course there might be other complications independent of the star player being sick. (Unless, of course, you think the possibility of the lie being detected is the only failure mode, in which case I’d say you’re being unrealistically optimistic.)
Also, it tends to be my experience that lies of omission are much easier to cover up than explicit lies, and the sort suggested in the original scenario seem to be closer to the former than to the latter. Any comments here?
(I also think that the main problem with lying, from a moral perspective, is not just that it causes epistemic inaccuracy on the part of the person being lied to, but that it causes inaccuracies in a way that interferes with them instrumentally. Lying omissively about one’s own mental state, which is unlikely to be instrumentally important anyway, in an attempt to improve the other person’s epistemic accuracy about the world around them, a far more instrumentally useful matter, seems like it might actually be morally justifiable.)
Lying also does heavy damage to one’s credibility. The binary classification of other people into “honest folk” and “liars” is quite widespread in the real world. You get classified into “liars”, pretty hard to get out of there.
Well, you never actually say anything untrue; you’re just acting uncertain in order to have a better chance of getting through to the other person. It seems intuitively plausible that the reputational effects from that might not be as bad as the reputational effects that would come from, say, straight-out lying; I accept that this may be untrue, but if it is, I’d want to know why. Moreover, all of this is contingent upon you being found out. In a scenario like this, is that really that likely? How is the other person going to confirm your mental state?
YMMV, of course, but I think what matters is the intent to deceive. Once it manifests itself, the specific forms the deception takes do not matter much (though their “level” or magnitude does).
How is the other person going to confirm your mental state?
This is not a court of law, no proof required—“it looks like” is often sufficient, if only for direct questions which will put you on the spot.
Well, yes, but are they really going to jump right to “it looks like” without any prior evidence? That seems like a major case of privileging the hypothesis. I mean, if you weren’t already primed by this conversation, would you automatically think “They might be lying about being unconvinced” if someone started saying something skeptical about, say, cryonics? The only way I could see that happening is if the other person lets something slip, and when the topic in question is your own mental state, it doesn’t sound too hard to keep the fact that you already believe something concealed. It’s just like passing the Ideological Turing Test, in a way.
but are they really going to jump right to “it looks like” without any prior evidence?
Humans, in particular neurotypical humans, are pretty good at picking up clues (e.g. nonverbal) that something in a social situation is not quite on the up-and-up. That doesn’t necessarily rise to the conscious level of a verbalized thought “They might be lying...”, but manifests itself as a discomfort and unease.
it doesn’t sound too hard
It’s certainly possible, and is easy for a certain type of person. I expect it to be not so easy for a different type, like the people who tend to hang out at LW… You need not just conceal your mental state; you need to actively pretend to have a different mental state.
Fair enough. How about online discourse, then? I doubt you’d be able to pick up much nonverbal content there.
It is much easier to pretend online, but it’s also harder to convince somebody of something.
Would you say the difficulty of convincing someone scales proportionally with the ease of pretending?
Hm. I don’t know. I think it’s true when comparing a face-to-face conversation with an online one, but I have no idea whether that can be extended to a general rule.
Moreover, all of this is contingent upon you being found out. In a scenario like this, is that really that likely?
Yes. It is.
That’s not very helpful, though. Could you go into specifics?
In general, any argument for the success of a plan that sounds like “how likely is it that it could go wrong?” is a planning fallacy waiting to bite you.
Specifically, people can be quite good at detecting lies. On one theory, that’s what we’ve evolved these huge brains for: an arms race of lying vs. detecting lies. If you lie as well as you possibly can, you’re only keeping up with everyone else detecting lies as well as they can. On internet forums, I see concern trolls and fake friends being unmasked pretty quickly. Face to face, when person A tells me something about person B not present, I have sometimes had occasion to think, “ok, that’s your story, but just how much do I actually believe it?”, or “that was the most inept attempt to plant a rumour I’ve ever heard; I shall be sure to do exactly what you ask and not breathe a word of this to anyone, especially not to the people you’re probably hoping I’ll pass this on to.” If it’s a matter that does not much concern me, I won’t even let person A know they’ve been rumbled.
In the present case, the result of being found out is not only that your relationship ends with the person whose religion you were trying to undermine, but they will think that an atheist tried to subvert their religion with lies, and they will be completely right. “As do all atheists”, their co-religionists will be happy to tell them afterwards, in conversations you will not be present at.
On internet forums, I see concern trolls and fake friends being unmasked pretty quickly.
In what manner do you think it is most likely for this to occur?
If possible, could you outline some contributing factors that led to you spotting the lie?
That’s a bit like asking how I recognise someone’s face, or how I manage to walk in a straight line. Sometimes things just “sound a bit off”, as one says, which of course is not an explanation, just a description of what it feels like. That brings to my attention the distinction between what has been said and whether it is true, and then I can consider what other ways there are of joining up the dots.
Of course, that possibility is always present when one person speaks to another, and having cultivated consciousness of abstraction, it requires little activation energy to engage. In fact, that’s my default attitude whenever person A tells me anything negatively charged about B: not to immediately think “what a bad person B is!”, although they may be, but “this is the story that A has told me; what seems to me likely to be true?”
Well, based on that description, would I be accurate in saying that it seems as though your “method” would generate a lot of false positives?
You can always trade off specificity for sensitivity. It is also possible to ask additional questions when you are suspicious.
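The trade-off mentioned here can be made concrete with a toy sketch. The “suspiciousness” scores and labels below are entirely hypothetical, invented for illustration: lowering the suspicion threshold catches more lies (higher sensitivity) at the cost of flagging more honest statements (lower specificity).

```python
# Hypothetical data: (suspiciousness score, 1 if an actual lie else 0).
cases = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.4, 0), (0.3, 0), (0.2, 0)]

def rates(threshold):
    """Flag anything scoring at or above the threshold as a lie."""
    tp = sum(1 for s, lie in cases if s >= threshold and lie)
    fn = sum(1 for s, lie in cases if s < threshold and lie)
    tn = sum(1 for s, lie in cases if s < threshold and not lie)
    fp = sum(1 for s, lie in cases if s >= threshold and not lie)
    sensitivity = tp / (tp + fn)   # fraction of lies caught
    specificity = tn / (tn + fp)   # fraction of honest statements not flagged
    return sensitivity, specificity

# A strict threshold misses lies; a lax one flags honest people.
print(rates(0.75))  # (0.666..., 1.0): high specificity, lower sensitivity
print(rates(0.35))  # (1.0, 0.5): high sensitivity, lower specificity
```

The point matches the comment above: there is no free lunch, only a choice of which error you would rather make, which is why asking follow-up questions when suspicious (gathering more evidence rather than moving the threshold) is attractive.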
Suspending judgement is not a false positive. And even from such a limited interaction as seeing the name and subject line of an email, I am almost never wrong in detecting spam, and that’s the spam that got past the automatic filters. I don’t think I’m exceptional; people are good at this sort of thing.
My hobby: looking at the section of the sidebar called “Recent on rationality blogs”, and predicting before mousing over the links whether the source is SlateStarCodex, Overcoming Bias, an EA blog, or other. I get above 90% there, and while “Donor coordination” is obviously an EA subject, I can’t explain what makes “One in a Billion?” and “On Stossel Tonight” clearly OB titles, while “Framing for Light Instead of Heat” could only be SSC.
Deliberately uninformative title. Robin Hanson does this fairly often, Scott much less so. Very short, which is highly characteristic of OB. Very large number is suggestive of “large-scale” concerns, more characteristic of OB than of Scott. Nothing that obviously suggests EAism.
On Stossel Tonight
Self-promoting (RH frequently puts up things about his public appearances; other sidebarry folks don’t). Very short. Assumes you know what “Stossel” is; if you don’t this reads as “deliberately uninformative” (somewhat typical of OB), and if you do it reads as “right-wing and businessy connections” (very typical of OB).
(As you may gather, I share your hobby.)
I don’t think I’m exceptional; people are good at this sort of thing.
Huh. I must just be unusually stupid with respect to “this sort of thing”, then, as I’m rarely able to discern a plausible-sounding lie from the truth based on nonverbal cues. (As a result, my compensation heuristic is “ignore any and all rumors, especially negative ones”.) Ah, well. It looks like I implicitly committed the typical mind fallacy in assuming that everyone would have a similar level of difficulty as I do when detecting “off-ness”.
That sounds like an awesome hobby, and one that I feel like I should start trying. Would you say you’ve improved at doing this over time, or do you think your level of skill has remained relatively constant?
I couldn’t really say. Back when I read OB, I’d often think, “Yes, that’s a typical OB title”, but of course I knew I was looking at OB. When the sidebar blogroll was introduced here, I realised that I could still tell the OB titles from the rest. The “X is not about Y” template is a giveaway, of course, but Hanson hasn’t used that for some time. SSC tends to use more auxiliary words, OB leaves them out. Where Scott writes “Framing For Light Instead Of Heat”, Hanson would have written “Light Not Heat”, or perhaps “Light Or Heat?”.
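The heuristics described here (OB titles are short and drop auxiliary words, SSC titles keep them) could be caricatured in code. The word list and length cutoff below are my own guesses for the sake of illustration, not anything stated in the thread:

```python
# Hypothetical function words whose presence suggests an SSC-style title.
FUNCTION_WORDS = {"for", "of", "instead", "the", "a", "an", "is", "not", "on", "in", "to"}

def guess_source(title):
    """Crude OB-vs-SSC guess: short and nearly function-word-free reads as OB."""
    words = title.lower().replace("?", "").split()
    aux = sum(1 for w in words if w in FUNCTION_WORDS)
    if len(words) <= 4 and aux <= 1:
        return "OB"
    return "SSC"

print(guess_source("Light Not Heat"))                     # OB
print(guess_source("Framing For Light Instead Of Heat"))  # SSC
```

A real classifier would of course use many more cues (self-promotion, scale words, question marks), but even this two-feature sketch separates the example titles from the comment above.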
It sounds like you’re implying that most lies are easily found, and consequently, most unchallenged statements are truths.
That’s really, really stretching my capacity to believe. Either you’re unique in this ability, or you’re also committing the typical mind fallacy, w.r.t. thinking all people are only as good at lying (at most) as you are at sniffing them out.
Emphasis added:
Moreover, all of this is contingent upon you being found out. *In a scenario like this*, is that really that likely?
Yes. It is.
It sounds like you’re implying that most lies are easily found
In a scenario like this, i.e. pretending to be undergoing a deep crisis of faith in order to undermine someone else’s. My observation is that in practice, concern trolling is rapidly found out, and the bigger the audience, the shorter the time to being nailed.
thinking all people are only as good at lying (at most) as you are at sniffing them out.
On the whole, people are as good at lying as, on the whole, people are at finding them out, because it’s an arms race. Some will do better, some worse; anyone to whom the idea “why not just lie!” has only just occurred is unlikely to be in the former class.
I should emphasize that his crisis of faith was genuine at the time—but it should work even if it’s not genuine, as long as the facade is convincing.
Most people are not able to produce, by conscious choice, the kind of strong emotions that come with a genuine crisis of faith. Pretending to have them might come off as creepy even if the other person can’t exactly pinpoint what’s wrong.
Fair enough. Are there any subjects that might not provoke as much emotional backlash? Cryonics, maybe? Start off acting unconvinced and then visibly think about it over a period of time, coming to accept it later on. It doesn’t seem like a lot of emotion is involved; it seems entirely intellectual, and the main factor against cryonics is the “weirdness factor”, so if there’s someone alongside you getting convinced, it might make it easier, especially due to conformity effects.
The topic of cryonics is about dealing with death. There’s a lot of emotion involved for most people.
It’s true that cryonics is about death, but I don’t think that necessarily means there’s “a lot of emotion involved”. Most rejections of cryonics that I’ve seen seem to be pretty intellectual, actually; there’s a bunch of cost-benefit analysis, probability estimation, and so on going on. I personally think it’s likely that there is some motivated cognition at work, but I don’t think it’s due to heavy emotions.

As I said in my earlier comment, I think that the main factor against cryonics is that it seems “weird”, and therefore the people who are signed up for it also seem “weird”. If that’s the case, then it may be to the advantage of cryonics advocates to place themselves in the “normal” category first by acting skeptical of a crankish-sounding idea, before slowly getting “convinced”. Compare that approach to the usual approach: “Hey, death sucks, wanna sign up to get your head frozen so you’ll have a chance at getting thawed in the future?” Comparatively speaking, I think the usual approach is significantly more likely to get you landed in the “crackpot” category.
Most rejections of cryonics that I’ve seen seem to be pretty intellectual, actually; there’s a bunch of cost-benefit analysis, probability estimation, and so on going on.
That’s really not how most people make their decisions.
Compare that approach to the usual approach: “Hey, death sucks, wanna sign up to get your head frozen so you’ll have a chance at getting thawed in the future?”
There are plenty of ways to tell someone about cryonics that don’t involve a direct plea for them to take action.
That’s really not how most people make their decisions.
Maybe it’s not how most people make their decisions, but I have seen a significant number of people who do reject cryonics on a firmly intellectual basis, both online and in real life. I suppose you could argue that it’s not their true rejection (in fact, it almost certainly isn’t), but even so, that’s evidence against heavy emotions playing a significant part in their decision process.
There are plenty of ways to tell someone about cryonics that don’t involve a direct plea for them to take action.
Yes, but most of them still suffer from the “weirdness factor”.