I really appreciate you sharing your concerns. It helps me and others involved in the project learn what to avoid going forward and how to optimize our methods. Thank you for laying them out so clearly! I think this comment will be something I come back to in the future as I and others create content.
I want to see if I can address some of the concerns you expressed.
In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional—euphemisms that do not associate rationality as such with what we’re doing. I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.
I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what "science-based" itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what "science-based" means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in that article above. Hope this helps address some of the concerns about arguing from authority.
I hear you about the inauthentic-feeling writing style. As I told Lumifer in my comment below, I cringed at that when I was learning how to write that way, too. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It's very ughy. This writing style is much more natural for me. So is this.
However, this inauthentic-feeling writing style is the writing style needed to get into Lifehack. I have been trying to change my writing style to get into venues like that for the last year and a half, and only in the last couple of months did I succeed in changing it enough to be published in Lifehack. Unfortunately, when trying to spread good ideas to the kind of people who read Lifehack, it's necessary to use the language, genre, and format that they want to read and that the editors publish. Believe me, I also had my struggles with editors there, who cut out more complex points and links to scientific papers as too complex for their audience.
This gets at the broader point of who reads these articles. I want to quote a comment that Tem42 made in response to Lumifer:
Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don’t smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.
Indeed, the site itself provides a filter. The people who read that site are not like you and me. Don’t fall for the typical mind fallacy here. They have complete cognitive ease with this content. They like to read it. They like to share it. This is the stuff they go for. My articles are meant to go higher than their average, such as this or this, conveying both research-based tactics applicable to daily life and frameworks of thinking conducive to moving toward rationality (without using the word, as I mentioned above). Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.
Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?
Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?
Effectively no. I understand that you’re aware of these risks and are able to list mitigating arguments, but the weight of those arguments does not resolve my worries. The things you’ve just said aren’t different in gestalt from what I’ve read from you.
To be potentially more helpful, here are a few ways the arguments you just made fall flat for me:
I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.
Connectivity to the rationalist movement or the "rationality" keyword isn't necessary to immunize people against the ideas. You're right that if you literally never use the word "bias" then it's unlikely my nightmare imaginary conversational partner will have a strong triggered response to the word "bias", but if they respond the same way to the phrase "thinking errors", or realize at some point that's the concept I'm talking about, it's the same pitfall. And in terms of catalyzing opposition, there is enough connectivity for motivated antagonists to make such connections and use every deviation from perfection as ammunition against even fully correct forms of good ideas.
For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what "science-based" means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in that article above. Hope this helps address some of the concerns about arguing from authority.
I can’t find any discussion in the linked article about why research is a key way of validating truth claims; did you link the correct article? I also don’t know if I understand what you’re trying to say; to reflect back, are you saying something like “People first need to be convinced that scientific studies are of value, before we can teach them why scientific studies are of value.” ? I … don’t know about that, but I won’t critique that position here since I may not be understanding.
(...) Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.
You seem to be saying that since the writing is of the form needed to get on Lifehack, and since in fact people are reading it on Lifehack, that they will then not suffer from any memetic immunization via the ideas. First, not all immunization is via negative reactions; many people think science is great, but have no idea how to do science. Such people can be in a sense immunized from learning to understand the process; their curiosity is already sated, and their decisions made. Second, as someone mentioned somewhere else on this comment stream, it’s not obvious that the Lifehack readers who end up looking at your article will end up liking or agreeing with your article.
You’re clearly getting some engagement, which is suggestive of positive responses, but what if the distribution of response is bimodal, with some readers liking it a little bit and some readers absolutely loathing it to the point of sharing their disgust with friends? Google searches reveal negative reactions to your materials as well. The net impact is not obviously positive.
use every deviation from perfection as ammunition against even fully correct forms of good ideas.
As a professional educator and communicator, I have deep visceral experience with how "fully correct forms of good ideas" are inherently incompatible with bridging the inferential distance between the ordinary Lifehack reader and the kind of thinking found on Less Wrong. Believe me, I have tried to explain more complex ideas from rationality to students many times. Moreover, I have tried to get more complex articles into Lifehack and elsewhere many times. They have all been rejected.
This is why it’s not possible for the lay audience to read scientific papers, or even the Sequences. This is why we have to digest the material for them, and present it in sugar-coated pills.
To be clear, I am not speaking of talking down to audiences. I like sugar-coated pills myself when I take medicine. To use an example related to knowledge, when I am offered information on a new subject, I first have to be motivated to want to engage with the topic, then learn the basic broad generalities, and only then go on to learn more complex things that represent the “fully correct forms of good ideas.”
This is the way education works in general. This is especially the case for audiences who are not trapped in the classroom like my college students. They have to be motivated to invest their valuable time into learning about a new topic. They have to really feel it’s worth their time and energy.
This is why the material has to be presented in an entertaining and engaging way, while also containing positive memes. Listicles are simply the most entertaining and engaging format that also deals with the inferential gap. The listicles offer bread crumbs in the form of links that more interested readers can follow to get to the more complex material, developing their knowledge over time and slowly bridging that inferential gap. More on how we do this is in my comment here.
I can’t find any discussion in the linked article about why research is a key way of validating truth claims
The article doesn’t discuss why research is a key way of validating truth claims. Instead of telling, it shows that research is a key way of validating truth claims. Here is a section from the article:
Smiling and other mood-lifting activities help improve willpower. In a recent study, scientists first drained the willpower of participants through having them resist temptation. Then, for one group, they took steps to lift people’s moods, such as giving them unexpected gifts or showing them a funny video. For another group, they just let them rest. Compared to people who just rested for a brief period, those whose moods were improved did significantly better in resisting temptation later! So next time you need to resist temptation, improve your mood!
This discussion of a study validating the proposition that "improving mood = higher willpower" shows, rather than tells, the value of scientific studies as a way to validate truth claims. This is the first point in the article. In the rest of the article, I link to studies, or to articles linking to studies, without going over each study, since I have already discussed one study and demonstrated to Lifehack readers that studies are a powerful form of evidence for determining truth claims.
Now, I hear you when you say that while some people may benefit by trying to think like scientists more and consider how to study the world in order to validate claims, others will be simply content to rely on science as a source of truth. While I certainly prefer the former, I’ll take the latter as well. How many global warming or evolution deniers are there, including among Lifehack readers? How many refuse to follow science-informed advice on not smoking and other matters? In general, if the lesson they learn is to follow the advice of scientists, instead of religious preachers or ideological politicians from any party, this will be a better outcome for the world, I would say.
what if the distribution of response is bimodal, with some readers liking it a little bit and some readers absolutely loathing it to the point of sharing their disgust with friends
I have an easy solution for that one. Lifehack editors carefully monitor the sentiment of social media reactions to their articles, and if there are negative reactions, they let writers know. They did not let me know of any significant negative reactions to my article above the baseline, which is an indication that the article has been highly positively received by their audience and by those they share it with.
I think I presented plenty of information in my two long comments in response to your concerns. So what are your probabilities now of the worst-case scenario and the horrific long-term impact? Still at 20%? Is your estimate that my activities are net positive still at 30%? If so, what information would it take to shift your thinking?
One idea is to try to teach your audience about overconfidence first, e.g. the way this game does with the calibration questions up front. See also.
Nice idea! Thanks for the suggestion. Maybe also a Caplan Test.
I’ll second the suggestion of introducing people to overconfidence early on, because (hopefully) it leads to a more questioning mindset.
I would note that the calibration section of the otherwise-awesome Adventures in Cognitive Biases is heavily geared towards a particular geographic demographic, and several of the peers I've introduced it to were a little put off by it, so consider encouraging people to stick with the calibration through to the meatier subject matter of the Adventure itself.
Thanks!
EDIT: added link to my other comment
EDIT: On reflection, I want to tap out of this conversation. Thanks for the responses.