Not sure if it makes any difference, but instead of “stupid people” I think of people reading articles about ‘life hacking’ as “people who will probably get little benefit from the advice, because they will most likely immediately read a hundred more articles and never apply the advice”; and also that the format of the advice completely ignores the inferential distances, so pretty much the only useful thing such an article could give you is a link to a place that provides the real value. And if you are really, really lucky, you will notice the link, follow the link, stay there, and get some of the value.
If I believed the readers were literally stupid, then of course I wouldn’t see much value in advertising LW to them. LW is not useful for stupid people, but it can be useful to people… uhm… like I used to be before I found LW.
Which means: I used to spend a lot of time browsing random internet pages; a few times I found a link to some LW article, read it, and moved on; and only after some time did I realize: “Oh, I have already found a few interesting articles on the same website. Maybe instead of randomly browsing the web, reading this one website systematically could be better!” And that was my introduction to the rationalist community; these days I regularly attend LW meetups.
Could Gleb’s articles provide the same gateway for someone else (albeit only for a tiny fraction of the readership)? I don’t see a reason why not.
Yes, the clickbait site will make money. Okay. If someone instead made paper flyers for LW, the printing company would make money.
Indeed, the people who read one of our articles, for example the Lifehack article, are not inherently stupid. They have the same urge for self-improvement that all of us here on Less Wrong have. They just have far less education and access to information, and of course different tastes, preferences, and skills. Moreover, the inferential gap is huge, as you correctly note.
The question is what people will do: will they actually follow the links toward deeper engagement? Let’s take the Lifehack article as an example of our broader model, which assumes that once people check out our content on other websites and venues, some will then visit the Intentional Insights website to engage with its content. After the Lifehack article on 6 Science-Based Hacks for Growing Mentally Stronger appeared, it was shared over 2K times on social media, so it probably had views in the tens if not hundreds of thousands.
Then, over 1K people visited the Intentional Insights website directly from the Lifehack website. In other words, they were interested enough not only to skim the article, but also to follow the links to Intentional Insights, which was listed in my bio and elsewhere. Of those, some will want to engage with our content further. As an example, we had a large wave of new people follow us on Facebook and other social media and subscribe to our newsletter in the week after the article came out. I can’t say how many did so as a result of seeing the article rather than other factors, but there was a large bump. So there is evidence of people wanting to get more thoroughly engaged.
The articles are meant to provide a gateway, in other words. And there is evidence of people following the breadcrumbs. Eventually, after they receive enough education, we would introduce them to ClearerThinking, CFAR, and LW. We are careful to avoid Endless September scenarios by not explicitly promoting Less Wrong heavily. For more on our strategy, see my comment below.
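As a rough back-of-the-envelope illustration of that funnel (the total view count is an assumed range; the only hard numbers above are the 2K+ shares and 1K+ click-throughs): if the article received somewhere between 20,000 and 100,000 views, then roughly

1,000 / 100,000 ≈ 1%   to   1,000 / 20,000 = 5%

of readers clicked through to the Intentional Insights site, and only some further fraction of those went on to follow or subscribe.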
Not that I belong to his target demographic, but his articles would make me cringe and rapidly run in the other direction.
They are intended to not appeal to you, and that’s the point :-) If something feels cognitively easy to you and does not make you cringe at how low-level it is, then you are not the target audience. Similarly, you are not the target audience if something is overwhelming for you to read. Try to read them from the perspective of someone who does not know about rationality. A sample of evidence: this article was shared over 2K times by its readers, which means that tens and maybe hundreds of thousands of people read it.
I don’t cringe at the level. I cringe at the slimy feel and the strong smell of snake oil.
It might be useful to identify what exactly trips your snake-oil sensors here. Mine were tripped when it claimed to be science based but referenced no research papers, but other than that it looked okay to me.
Unless you mean simply that the site it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people who read articles on that site don’t smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.
To clarify about the science-based point, I tried to put in links to research papers, but unfortunately the editors cut most of them out. I was able to link to one peer-reviewed book, but the rest of the links had to be to other articles that contained research, such as this one from Intentional Insights itself.
Yup, very much agreed on the point of the site smelling like snake oil, and this enabling highly targeted cognitive altruism.
The overwhelming stench trips them.
This stuff can’t be edited to make it better, it can only be dumped and completely rewritten from scratch. Fisking it is useless.
Yup, I hear you. I cringed at that when I was learning how to write that way, too. You can’t believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It’s very ughy.
However, having calculated the trade-offs and done a Bayesian-style analysis combined with a multi-attribute utility theory (MAUT) assessment, it seems that the negative feelings we at InIn get (mostly me at this point, since others are not yet writing these types of articles for fear of this kind of backlash) are worth the rewards of raising the sanity waterline of people who read those types of websites.
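For readers unfamiliar with the acronym, MAUT scores each option as a weighted sum of single-attribute utilities; a minimal sketch, with purely hypothetical attributes and weights rather than whatever InIn actually plugged in:

U(option) = w_1·u_1(option) + w_2·u_2(option) + … + w_n·u_n(option), with the weights w_i summing to 1,

for example over attributes such as readers reached, reputational risk, and author discomfort, each scored on a 0-to-1 scale; writing the article comes out ahead if U(write) > U(don’t write).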
So, why do you think this is necessary? Do you believe that proles have an unyielding “tits or GTFO” mindset so you have to provide tits in order to be heard? That ideas won’t go down their throat unless liberally coated in slime?
It may look to you like you’re raising the waterline, but from the outside it looks like all you’re doing is contributing to the shit tsunami.
I think “revulsion” is a better word than “backlash”.
Wasn’t there a Russian intellectual fad, around the end of the 19th century, about “going to the people” and “becoming of the people” and “teaching the people”? I don’t think it ended well.
How do you know? What do you measure that tells you you are actually raising the sanity waterline?
Look, we can choose to wall ourselves off from the shit tsunami out there, and stay in our safe Less Wrong corner. Or we can try to go into the shit tsunami, provide stuff that’s less shitty than what people are used to consuming, and then slowly build them up. That’s the purpose of Intentional Insights: to reach out and build people up to growing more rational over time. You don’t have to be the one doing it, of course. I’m doing it. Others are doing it. But do you think it’s better to improve the shit tsunami, or to stick our fingers in our ears, pretend it’s not there, and not do anything about it? I think it’s better to improve the shit tsunami of Lifehack and other such sites.
The measures we use, the methods we decided on, and our reasoning behind them are described in my comment here.
Well, first of all I can perfectly well stay out of the shit tsunami even without hiding in the LW corner. The world does not consist of two parts only: LW and shit.
Second, you contribute to the shit tsunami; the stuff you provide is not less shitty. It is exactly what the tsunami consists of.
The problem is not with the purpose. The problem is with what you are doing. Contributing your personal shit to the tsunami does not improve it.
You measure, basically, impressions—clicks and eyeballs. That tells you whether the stuff you put out gets noticed. It does not tell you whether that stuff raises the sanity waterline.
So I repeat: how do you know?
Do you truly believe the article I wrote was no less shitty than the typical Lifehack article, for example this article currently on their front page? Is this what a reasonable outside observer would say? I’m willing to take a $1000 bet that more than 5 out of 10 neutral reasonable outside observers would evaluate my article as higher quality. Are you up for that bet? If not, please withdraw your claims. Thanks!
I am not terribly interested in distinguishing the shades of brown or the nuances of aroma. To answer your question: yes, I do believe you wrote a typical Lifehack article of the typical degree of shittiness. In fact, I think you mentioned on LW your struggles in producing something sufficiently shitty for Lifehack to accept, and, clearly, you have succeeded in achieving the necessary level.
As to the bet, please specify what a “neutral reasonable” observer is and how you define “quality” in this context. Also, do I take it you are offering 1:1 odds? That implies you believe the probability you will lose is just under 50%, y’know...
Only if $1000 is an insignificant fraction of Gleb’s wealth, or his utility-from-dollars function doesn’t show the sort of decreasing marginal returns most people’s do.
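To spell out the point (a minimal sketch; the log utility and the $10,000 wealth figure below are purely illustrative assumptions): a risk-neutral bettor breaks even on an even-stakes $1000 bet when

p·(+1000) + (1 − p)·(−1000) = 0, i.e. p = 0.5,

but with a concave utility function u over wealth W, the bet is worth taking only when

p·u(W + 1000) + (1 − p)·u(W − 1000) > u(W),

and because the utility gained from winning is smaller than the utility lost from losing, the break-even p is above 0.5. With u(W) = ln(W) and W = $10,000, for instance, it is about 0.525, so a willingness to bet at 1:1 only shows the bettor thinks their chance of losing is below roughly 47.5%, not that it is just under 50%.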
Indeed, $1000 is quite a significant portion of my wealth.
$1000 is not an insignificant portion of my wealth, as gjm notes. I certainly do not want to lose it.
We can take 10 LessWrongers who are not friends with you or me, have not participated in this thread, and do not know about this debate as neutral observers. They should be relatively easy to gather through posting on the open thread or elsewhere.
We can have gjm or another external observer recruit people just in case one of us doing it might bias the results.
So, going through with it?
Sorry, I don’t enjoy gambling. I am still curious about “quality” which you say your article has and the typical Lifehacker swill doesn’t. How do you define that “quality”?
As an example, this article, like my others, links to and describes studies, gives advice that is informed by research, and conveys frames of thinking likely to lead to positive outcomes beyond building willpower, such as self-forgiveness, commitment, and goal setting.
And I imagine that based on your response, you take your words back. Thanks!
I am sorry to disappoint you. I do not.
Well, what kind of odds would you give me to take the bet?
As I said, I’m not interested in gambling. Your bet, from my point of view, is on whether a random selection of people will find one piece of shit to be slightly better or slightly worse than another piece of shit. I am not particularly interested in shades of brown; this establishes no objective facts and will not change my position. So why bother?
Four out of five dentists recommend… X-)
Ah, alright, thanks for clarifying. So it sounds like you acknowledge that there are different shades. Now, how do you cross the inference gap from people who like the darkest shade into lighter shades? That’s the project of raising the sanity waterline.
I am not interested in crossing the inference gap to people who like the darkest shade. They can have it.
I don’t think that raising the sanity waterline involves producing shit, even of particular colours.
You seem to have made two contradictory statements, or maybe we’re miscommunicating.
1) Do you believe that raising the sanity waterline of those in the murk—those who like the dark shade because of their current circumstances and knowledge, but are capable of learning and improving—is still raising the sanity waterline?
2) If you believe it is still raising the sanity waterline, how do you raise their sanity waterline if you do not intentionally produce slightly less shitty content in order to cross the inference gap?
I don’t think you can raise their sanity waterline by writing slightly lighter-shade articles on Lifehacker and such. I think you’re deluding yourself.
Ok, I will agree to disagree on this one.
Is it worth introducing one reader by poisoning nine, however? First impressions do matter, and if the first impression rationalism gives people is that of a cult making pseudoscientific pop-self-help-ish promises about improving their lives, you’re trading short-term gains for long-term difficulties overcoming that reputation (which, I’ll note, the rationalist community already struggles with).
Please avoid using terms like “poisoning” and other vague claims. That is a Dark Arts style of argument, which is your skill set, as you previously clearly acknowledged, and it attacks Intentional Insights through pattern-matching and vague claims. Instead, please consider using rational communication. For example, be specific and concrete about how our articles, for example this one, poison nine readers out of ten and introduce one reader to rationality. Thanks!