“Actually, X” is never a good way to sell anything. Scientists are quite prone to this kind of speech, which from their perspective is fully justified (because they’ve exhaustively studied a certain topic) - but what the average person hears is the “you don’t know what you’re talking about” half of the implication, which makes them deaf to the “I do know what I’m talking about” half. If you just place the fruits of rationality on display, anyone with a brain will be able to recognize them for what they are, and they’ll adjust their judgements accordingly.
Here’s an interesting exercise—find anyone in the business of persuasion (a lawyer, a salesman, a con artist) and see how often you hear them say things like “no, actually...” (or how often you hear them not saying these things).
My impression: a major issue is that other people get the idea that LessWrong comes from a few people preaching their own ideas, when in reality it’s mostly people presenting ideas that have been discovered by, and are widely agreed upon by, academic experts. Just saying “it comes from academics” doesn’t seem to address this major issue directly enough.
That said, I see what you mean about “actually, X” being a pattern that may lead people to instinctively argue the other way. So I see that there is a cost, but my impression is that the cost doesn’t outweigh the benefit of directly addressing a major concern that others have, at least for most audiences; there are certainly some less charitable audiences who need to be approached more gently.
I’d consider my confidence in this to be moderate. Getting your data point has led me to shift downwards a bit.
Hate to have to say this, but directly addressing a concern is a form of social confirmation that the concern deserves to be addressed, and thus that it’s based in something real. Imagine a Scientologist offering to explain to you why Scientology isn’t a cult.
Of the people I know of who are outright hostile to LW, the hostility is mostly because of basilisks and polyamory and other things that make LW both an easy and a fun target for derision. And we can’t exactly say that those things don’t exist.
Hate to have to say this, but directly addressing a concern is a form of social confirmation that the concern deserves to be addressed, and thus that it’s based in something real.
I could see some people responding that way. But I could see others responding with, “oh, ok—that makes sense”. Or maybe, “hm, I can’t tell whether this is legit—let me look into it further”. There are lots of citations and references in the LessWrong writings, so it’s hard to argue with the fact that it’s heavily based on existing science.
Still, there is the risk of some people just responding with, “Jeez, this guy is getting defensive already. I’m skeptical. This LessWrong stuff is not for me.” I see that directly addressing a concern can signal bad things and cause this reaction, but for whatever reason, my brain is producing a feeling that this sort of reaction will be in the minority in this context (in other contexts, I could see the pattern being more harmful). I’m starting to feel less confident in that, though. I have to be careful not to Typical Mind here. I know I have an issue with Typical Minding too much, and I need to look out for it.
The good thing is that user research could totally answer this question. Maybe that’d be a good activity for a meet-up group or something. Maybe I’ll give it a go.
Behold LW!
:-)