Rationality drugs. Many nootropics can increase cognitive capacity, which, according to Stanovich’s picture of the cognitive science of rationality, should help with performance on some rationality measures. However, good performance on many rationality measures requires not just cognitive capacity but also cognitive reflectiveness: the disposition to choose to think carefully about something and avoid bias. So: Are there drugs that increase cognitive reflectiveness / “need for cognition”?
Debiasing. I’m developing a huge, fully-referenced table of (1) thinking errors, (2) the normative models they violate, (3) their suspected causes, (4) rationality skills that can meliorate them, and (5) rationality exercises that can be used to develop those rationality skills. Filling out the whole thing is of course taking a while, and any help would be appreciated. A few places where I know there’s literature but I haven’t had time to summarize it yet include: how to debias framing effects, how to debias base rate neglect, and how to debias confirmation bias. (But I have, for example, already summarized everything on how to debias the planning fallacy.)
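To make the intended structure concrete, here is a minimal sketch of how one row of such a table might be represented; the field names and the example entry are my own illustration, not the actual table or summarized literature:

```python
from dataclasses import dataclass, field

@dataclass
class DebiasingRow:
    """One row of the planned table: a thinking error and what is known about fixing it."""
    thinking_error: str                                            # (1) the error
    normative_model_violated: str                                  # (2) the standard it violates
    suspected_causes: list[str] = field(default_factory=list)      # (3)
    meliorating_skills: list[str] = field(default_factory=list)    # (4)
    exercises: list[str] = field(default_factory=list)             # (5)
    references: list[str] = field(default_factory=list)            # every claim needs a citation

# Purely illustrative placeholder entry:
example = DebiasingRow(
    thinking_error="framing effects",
    normative_model_violated="description invariance in expected utility theory",
    meliorating_skills=["re-describe the problem in the opposite frame"],
    exercises=["rapidly restate 'mortality rates' as 'survival rates' and vice versa"],
    references=["(to be filled in)"],
)
```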
I did my high school science experiment on nootropics in rats, to see whether they affected the time it took the rats to learn to navigate a maze, compared to a control group that didn’t take them.
The test subjects I gave the drugs to all died. The control group eventually learned how to go through the maze without making any wrong turns.
I was given a B+ and an admonishment to never let anyone know the teacher had pre-approved my experimenting on mammals.
What drugs did you give them?
Ginkgo biloba extract.
A whole bunch of vasodilator. Yes, I can imagine that working well as rat poison. :)
Ah, that explains my confusion; I saw “nootropics” and expected a racetam or something along those lines.
cognitive reflectiveness: the disposition to choose to think carefully about something and avoid bias

I sometimes worry that this disposition may be more important than everything we typically think of as “rationality skills” and more important than all the specific named biases that can be isolated and published, but that it’s underemphasized on LW because “I’ll teach you these cool thinking skills and you’ll be a strictly more awesome person” makes for a better website sales pitch than “please be cognitively reflective to the point of near-neuroticism, I guess one thing that helps is to have the relevant genes”.
Do you know of research supporting debiasing scope insensitivity by introducing differences in kind that approximately preserve the subjective quantitative relationship? If not, I will look for it, but I don’t want to if you already have it at hand.
I am thinking in particular of Project Steve. Rather than counter a list of many scientists who “Dissent from Darwinism” with a list of many scientists who believe evolution works, they made a list of hundreds of scientists named Steve who believe evolution works.
In the mind, “many people” counts as about the same whether it’s hundreds or thousands, but “many people” feels like fewer than “many Steves”. That’s the theory, anyway.
Intuitively it sounds like it should work, but I don’t know if there are studies supporting this.
There’s our solution to scope insensitivity about existential risks. “If unfriendly AI undergoes an intelligence explosion, millions of Steves will die. Won’t somebody please think of the Steves?”
Convert numbers and rates into equivalent traits or dispositions: Convert “85% of the taxis in the city are green” to “85% of previous accidents involved drivers of green cabs”. (Recent Kahneman interview)
Requisition social thinking: Convert “85%” to “85 out of 100”, or “Which cards must you turn over?” to “Which people must you check further?” (Wason test).
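As a toy sketch of the second conversion (the function and its exact output format are my own illustration, not anything from the thread):

```python
def as_natural_frequency(percent: float, n: int = 100) -> str:
    """Restate a percentage as a concrete count of people out of n,
    which is easier to reason about than an abstract rate."""
    return f"{round(percent * n / 100)} out of {n} people"

print(as_natural_frequency(85))         # "85 out of 100 people"
print(as_natural_frequency(0.3, 1000))  # "3 out of 1000 people"
```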
how to debias framing effects

Have people been trained in automatically thinking of “mortality rates” as “survival rates”, and so on? A good dojo game would be practicing restating a claim in the opposite framing as quickly as possible, until it became pre-conscious and one was consciously aware of what one heard and its opposite at the same time.
Fresh off the presses in the American Political Science Review this August, from Bullock at Yale: http://bullock.research.yale.edu/papers/elite/elite.pdf

An enduring concern about democracies is that citizens conform too readily to the policy views of elites in their own parties, even to the point of ignoring other information about the policies in question. This article presents two experiments that undermine this concern, at least under one important condition. People rarely possess even a modicum of information about policies; but when they do, their attitudes seem to be affected at least as much by that information as by cues from party elites. The experiments also measure the extent to which people think about policy. Contrary to many accounts, they suggest that party cues do not inhibit such thinking. This is not cause for unbridled optimism about citizens’ ability to make good decisions, but it is reason to be more sanguine about their ability to use information about policy when they have it.

(Emphasis mine.)
If one knew the extent to which one was biased by cues, and knew one’s opinion based on both cues and facts, it would be possible to calculate what one’s views would be without the cues.
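A minimal sketch of that back-calculation, assuming a simple linear mixing model; the weight and the example numbers are made up for illustration:

```python
def decued_opinion(stated: float, cue_position: float, cue_weight: float) -> float:
    """Back out the view you would hold on the facts alone, assuming
    stated = cue_weight * cue_position + (1 - cue_weight) * facts_only."""
    if not 0 <= cue_weight < 1:
        raise ValueError("cue_weight must be in [0, 1)")
    return (stated - cue_weight * cue_position) / (1 - cue_weight)

# You rate a policy 7/10, your party's elites signal 9/10, and you estimate
# cues account for 40% of your attitude:
print(decued_opinion(7, 9, 0.4))  # ~5.67: the estimate of your cue-free view
```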
Thanks! I knew some of that stuff, but not all. But for the table of thinking errors and debiasing techniques I need the references, too.
http://edge.org/conversation/the-marvels-and-flaws-of-intuitive-thinking

Now have a look at a very small variation that changes everything. There are two companies in the city; they’re equally large. Eighty-five percent of cab accidents involve blue cabs. Now this is not ignored. Not at all ignored. It’s combined almost accurately with a base rate. You have the witness who says the opposite. What’s the difference between those two cases? The difference is that when you read this one, you immediately reach the conclusion that the drivers of the blue cabs are insane, they’re reckless drivers. That is true for every driver. It’s a stereotype that you have formed instantly, but it’s a stereotype about individuals, it is no longer a statement about the ensemble. It is a statement about individual blue drivers. We operate on that completely differently from the way that we operate on merely statistical information that that cab is drawn from that ensemble.

...

A health survey was conducted in a sample of adult males in British Columbia of all ages and occupations. “Please give your best estimate of the following values: What percentage of the men surveyed have had one or more heart attacks? The average is 18 percent. What percentage of men surveyed both are over 55 years old, and have had one or more heart attacks? And the average is 30 percent.” A large majority says that the second is more probable than the first.

Here is an alternative version of that which we proposed, a health survey, same story. It was conducted in a sample of 100 adult males, so you have a number. “How many of the 100 participants have had one or more heart attacks, and how many of the 100 participants both are over 55 years old and have had one or more heart attacks?” This is radically easier. From a large majority of people making mistakes, you get to a minority of people making mistakes. Percentages are terrible; the number of people out of 100 is easy.
Regarding framing effects, one could write a computer program into which one could plug the numbers of a decision and have it converted into an Allais-paradox framing.
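Here is a rough sketch of what such a program might do, under my own reading of the idea; the representation and function name are assumptions, not an existing tool. It takes a pair of gambles that share a “common consequence”, re-allocates that shared chunk of probability, and shows you the reframed pair so you can check whether your preference flips:

```python
def replace_mass(lottery, prob, old_outcome, new_outcome):
    """Move `prob` of probability mass from old_outcome to new_outcome.
    A lottery is a list of (probability, outcome) pairs summing to 1."""
    out, remaining = [], prob
    for p, x in lottery:
        if x == old_outcome and remaining > 1e-12:
            take = min(p, remaining)
            remaining -= take
            if p - take > 1e-12:
                out.append((p - take, x))
            out.append((take, new_outcome))
        else:
            out.append((p, x))
    assert remaining < 1e-9, "lottery lacks enough mass on old_outcome"
    merged = {}
    for p, x in out:
        merged[x] = merged.get(x, 0.0) + p
    return sorted(((round(p, 12), x) for x, p in merged.items()), reverse=True)

# The classic Allais pair (payoffs in millions):
a = [(1.00, 1.0)]                            # sure $1M
b = [(0.89, 1.0), (0.10, 5.0), (0.01, 0.0)]  # 89% $1M, 10% $5M, 1% nothing
# Strip the 89% chance of $1M that both gambles share; by the independence
# axiom your preference between the pair should not change:
print(replace_mass(a, 0.89, 1.0, 0.0))  # [(0.89, 0.0), (0.11, 1.0)]
print(replace_mass(b, 0.89, 1.0, 0.0))  # [(0.9, 0.0), (0.1, 5.0)]
```

If you prefer the sure million in the first framing but the long shot in the second, the framing, rather than a stable set of preferences, is doing the work.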
One could commit to donating an amount of money to charity any time a free thing is acquired. (Ariely’s Lindt/Hershey’s experiment)
Are you including inducing biases as part of “debiasing”? For example, if people are generally too impulsive in spending money, a mechanism that merely made people more restrained could counteract that, but would be vulnerable to overshooting or undershooting. Here is the relevant study:

In Studies 2 and 3, we found that higher levels of bladder pressure resulted in an increased ability to resist impulsive choices in monetary decision making.
I probably should. This is usually called “rebiasing.”
I suggest making it a separate category, at least to start with. It will be easier to fold the entries back into “debiasing” later, if the distinction turns out to make little sense and there is really a continuum from debiasing to rebiasing, than it would be to separate them out after everything is filled in.
That is surprising! I would have guessed the reverse effect.
Ah, I think you misunderstood me (on reflection, I wasn’t very clear) — I’m doing an experiment, not a research project in the sense of looking over the existing literature.
(For the record, I decided on conducting something along the lines of the studies mentioned in this post to look at how distraction influences retention of false information.)
Have you included racism or its sub-components as fallacies? If so, what are the sub-components the fixing of which would ameliorate racism?
I have not. I’m not familiar with that literature, but Google is. Lemme know if you find anything especially interesting!
Uh… was I downvoted for replying with helpful links to a comment that was already below the 0 threshold?
Or perhaps I was downvoted for not including racism as a cognitive bias on my developing table of biases?
My guess is that it smacked a bit too much of LMGTFY.
Probably the latter. I’m reading through links from the links from the links of what you linked to; perhaps you could list all the biases you could use help on? I think my Ariely Lindt/Hershey’s solution of imposing a penalty on myself whenever I accept free things was a clever way of debiasing that bias (though I would think so, wouldn’t I?), and in the course of reading through all kinds of these articles (in a topic I am interested in) I could provide similar things.
I really do go through a lot of this stuff independently; I had read the Bullock paper and the Kahneman interview before you asked for help, and only after you asked did I realize I had information you wanted.
In any case, my above comment was probably downvoted because it was perceived as posturing, rather than because it isn’t a common concern. That interpretation best explains my getting downvoted for raising the issue and your getting downvoted for not taking it maximally seriously.