Things I’d like to see tested which may or may not have been tested before but I haven’t seen in the literature.
1) There’s a lot of evidence that people are wildly overconfident. The classic version: ask people to give a range for each item on a long list of quantities (say, the populations of various countries) such that they are 90% sure the true value falls inside, and their hit rate comes out well below 90%. Will people calibrate better when there is money at stake? (This is something that Mass Driver and I discussed a while back.) The way I’d test it: after they’ve given their ranges, see what bets they are willing to take on being correct and how closely those bets match their stated confidence (a rough scoring sketch follows after this list).
2) Are people who have learned about cognitive biases less subject to them to any substantial degree? The one I’m most curious about is the conjunction fallacy. The obvious way to test this is to take people who have just finished a semester of intro psychology or something similar and see whether they show less conjunction bias than students who have not (a sketch of that comparison also follows below).
3) Can training make one better at the color-word version of the Stroop interference test?
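As a concrete sketch of how I’d score the bets in 1), here’s a toy calculation; the bet structure and all the numbers in it are made up, and a real design would need to control the payoffs more carefully:

```python
# Illustrative scoring for proposal 1): compare stated confidence, the
# probability implied by the bets a subject accepts, and the actual hit rate.
# All numbers below are invented for illustration.

def implied_probability(stake: float, payout: float) -> float:
    """A subject indifferent to risking `stake` to win `payout` (net) on
    'my interval contains the true value' acts as if
    p * payout = (1 - p) * stake, i.e. p = stake / (stake + payout)."""
    return stake / (stake + payout)

# One record per question:
# (stated confidence, worst bet accepted as (stake, net payout), interval contained the true value?)
answers = [
    (0.90, (9.0, 1.0), True),    # willing to risk 9 to win 1 -> acts like p = 0.9
    (0.90, (1.0, 1.0), False),   # only accepts even odds     -> acts like p = 0.5
    (0.90, (4.0, 1.0), True),    # acts like p = 0.8
]

stated   = sum(a[0] for a in answers) / len(answers)
revealed = sum(implied_probability(*a[1]) for a in answers) / len(answers)
hit_rate = sum(a[2] for a in answers) / len(answers)

print(f"mean stated confidence:  {stated:.2f}")
print(f"mean bet-implied belief: {revealed:.2f}")
print(f"actual hit rate:         {hit_rate:.2f}")
```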
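And for 2), the comparison is just two proportions; a minimal sketch of one way to run the analysis, again with invented counts:

```python
# Illustrative analysis for proposal 2): compare the rate of conjunction-fallacy
# responses between students who have and have not taken intro psychology.
# Counts are invented; a two-proportion z-test is one simple way to compare them.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(k1, n1, k2, n2):
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                        # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return z, 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value

# e.g. 68/100 trained students vs 82/100 untrained commit the fallacy
z, p = two_proportion_z(68, 100, 82, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```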
Yes. The Stroop test is, along with spaced repetition, one of the most confirmed and replicated tasks in all of psychology, so it would be deeply surprising if no one had come up with training to make you better at the Stroop test. (Heck, there’s plenty of training available for IQ tests—like taking a bunch of IQ tests.)
I’d put a very high confidence on that, but as it happens, I don’t have to, since I recently saw discussion of one result on the Stroop test and meditation:
After training, subjects were tested on a variety of cognitive and personality tests, including associate learning, word fluency, depression, anxiety, locus of control, and of course Stroop. Results showed that the TM and MF groups together scored significantly higher on associate learning and word fluency than the no-training and relaxation-training groups. Perhaps most surprisingly, over a 36-month period, the survival rate for the TM and MF groups was significantly higher than for the relaxation and no-training groups (p<.00025). But more to the point, both TM and MF scored higher than MR and no-training on the Stroop task (p<.1; one-tailed test).
Or:
Incredibly, behavioral data showed that the standard Stroop effect (again, a cost in reaction time when reading incongruent words relative to congruent words) was completely eliminated in terms of both reaction time and accuracy for both the experimental and control groups. [ERP analyses revealed decreased visual activity under suggestion, including suppression of early visual effects commonly known as the P100 and N100, while fMRI showed reductions in a variety of regions including anterior cingulate]. The bottom line, then, is that even strong suggestion is enough to accomplish some amount of deprogramming, as measured through the Stroop task.
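For concreteness, the “cost in reaction time” those excerpts refer to is just the difference in mean reaction time between incongruent and congruent trials; a toy computation with invented numbers:

```python
# Toy computation of the Stroop interference effect: mean reaction time on
# incongruent trials (word and ink colour disagree) minus mean reaction time
# on congruent trials. The reaction times below (in ms) are invented.
from statistics import fmean

congruent_rt   = [612, 598, 634, 605, 621]   # e.g. "RED" printed in red
incongruent_rt = [781, 742, 803, 768, 759]   # e.g. "RED" printed in blue

stroop_effect = fmean(incongruent_rt) - fmean(congruent_rt)
print(f"Stroop interference: {stroop_effect:.0f} ms")  # ~157 ms here
```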
1) I’m surprised this hasn’t already been done. Many economists like to argue that “people are rational when it counts”, i.e. when there are stronger incentives. Similar to your proposal, I’m interested in seeing how priming affects decisions with incentives, and to my knowledge this hasn’t been done either (but IIRC it has been done without incentives).
2) IIRC the results have been replicated with economics and/or psychology graduate students (citation needed).
1) Different but related: people who trade stuff a lot suffer much less from the endowment effect. Also, while people are normally crap at randomising, with money at stake they get better very quickly.
Thanks.
It is possible that 1) has been done, but if so I haven’t seen the studies.