Illustrative of a common failure of rationality, with instrumental consequences such as buying sugar at ridiculous markups.
Does any of this undermine the need to use placebos
Nope.
or eliminate ‘the placebo effect’ as a shorthand of how experiments/trials can fail to show what their proponents would like them to show?
“Control response” would be a better shorthand (obviating the need for the derived term “nocebo”). Better yet, if you’re criticizing an experimental design, pinpoint the criticism to a specific mechanism: regression to the mean, natural course of the disease, measurement error, expectancy effect, and so on.
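To make the first of those mechanisms concrete, here is a minimal simulation sketch (the function name, cutoff, and sample size are made up for illustration) of how regression to the mean alone can produce an apparent improvement in a completely untreated group:

```python
import random

random.seed(0)

# Each subject's symptom score = stable trait + transient noise.
# Subjects enroll only when they feel unusually bad (score above a cutoff),
# so their follow-up measurement tends to drift back toward their own
# mean even with no treatment of any kind.
def untreated_trial(n_subjects=10000, cutoff=1.0):
    improvements = []
    for _ in range(n_subjects):
        trait = random.gauss(0, 1)
        baseline = trait + random.gauss(0, 1)
        if baseline > cutoff:  # only "sick enough" people enroll
            followup = trait + random.gauss(0, 1)  # untreated re-measurement
            improvements.append(baseline - followup)
    return sum(improvements) / len(improvements)

mean_improvement = untreated_trial()
print(round(mean_improvement, 2))  # clearly positive despite zero treatment
```

Because subjects enroll when they happen to feel worst, their second measurement drifts back toward their personal average; an uncontrolled before/after comparison would credit that drift to whatever pill they took.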
Illustrative of a common failure of rationality, with instrumental consequences such as buying sugar at ridiculous markups.
Which LWers, or rationalists in general, actually do this?
Better yet, if you’re criticizing an experimental design, pinpoint the criticism to a specific mechanism: regression to the mean, natural course of the disease, measurement error, expectancy effect, and so on.
How often can one pinpoint this? Is it really helpful to insist that the shorthand be expanded on to even more speculative criticisms, or are we just letting the perfect be the enemy of the better here?
(This is a serious question. I speculated a great deal about why poorly controlled dual n-back experiments showed a large effect, but it wasn’t until dozens of studies & 4 years later that Redick et al. surveyed the subjects and enabled me to say ‘ah, so part of it was expectancy effect!’)
Is it really helpful to insist that the shorthand be expanded on to even more speculative criticisms, or are we just letting the perfect be the enemy of the better here?
Part of the point I hoped to make was that “raising the sanity waterline” would be well served by better awareness of the processes of scientific inference—statistics, experimental design and so on. More people should know about regression to the mean, confounding, biases, unblinding, and file drawer effects: specific criticisms.
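One of these, the file drawer effect, lends itself to a quick simulation. The sketch below (all parameters are arbitrary, chosen only for illustration) shows how selective publication of nominally significant results manufactures a positive effect out of pure noise:

```python
import random
import statistics

random.seed(1)

# Simulate many small studies of a treatment with NO real effect.
# Each study compares a treated and a control group; only studies
# reaching nominal significance end up "published", the rest go
# into the file drawer.
def run_studies(n_studies=500, n_per_arm=20):
    published_effects = []
    for _ in range(n_studies):
        treated = [random.gauss(0, 1) for _ in range(n_per_arm)]
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(treated) / n_per_arm
              + statistics.variance(control) / n_per_arm) ** 0.5
        if diff / se > 2.0:  # roughly p < 0.05, one-sided
            published_effects.append(diff)
    return published_effects

published = run_studies()
# The "published literature" shows a consistent positive effect
# even though the true effect is exactly zero.
print(len(published), round(statistics.mean(published), 2))
```

Only a handful of studies survive the significance filter, and every one of them overestimates the (nonexistent) effect, which is exactly why a reader who knows this mechanism by name can critique a literature far more sharply than one who only knows “placebo effect”.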
As a specific example, take the Blackwell study which people elsewhere in this thread have pointed me to, supposedly showing that “chemically inert pills of different colors modulate vigilance”. (A claim counter to everyday experience, in which people take coffee to stay awake, rather than eat (red) strawberries.) I hope you’ll agree that “it’s the placebo effect” isn’t an appropriate shorthand to criticize that particular study.
In this case, “the placebo effect” acts as a semantic stopsign—it kills what legitimate curiosity people should have about this study, given that:
- it was conducted in 1972 but appears not to have been replicated since
- it fails to report outcome measures, effect sizes, or significance levels
- the sample is highly susceptible to selection bias (medical students)
- the sample is highly susceptible to pressure (the teacher is out to show something, and the students know it)
It’s very hard to know from the abstract what to make of the study, and a full text can’t be found easily; yet people are citing this one “source” all over the place (in some cases just making the bare claim without bothering to cite a source at all). I’ve come to recognize these as red flags.
A claim counter to everyday experience, in which people take coffee to stay awake, rather than eat (red) strawberries.
At least one of us is confused. What conflict do you see between the following two propositions? (1) Eating an otherwise inert red thing can make you more alert. (2) Drinking coffee does more to make you alert than eating a red strawberry does.
(Even if there were a conflict between those, I’d have a problem with what you said, since it could be true that (2′) drinking coffee doesn’t really wake you up more than eating strawberries but (3) people drink coffee anyway, e.g. by habit or because it’s cheaper or because strawberries are more fattening or something. But my main objection is that I don’t see any conflict between 1 and 2.)
“it’s the placebo effect” isn’t an appropriate shorthand to criticize that particular study.
No, it isn’t, but what’s meant to follow from that? I don’t think anyone’s claiming that “it’s the placebo effect” is some kind of universally-insightful response to any observation that involves possible placebos. (Your example even seems to have the wrong sign, as it were; someone inclined to overuse “placebo effect” as an explanation is surely more likely to be defending the Blackwell study than attacking it, and indeed you go on to suggest that those people would fail to criticize the study, not that they would criticize it in an unhelpful way.)
Possibly me. I’m provisionally retracting that; my reasoning was that if eating red things, drinking from red cups etc. reliably increased alertness someone would have noticed and we would be exploiting this effect, not looking for it within the restricted context of eating a pill. However, I’m now remembering that there is just such a claim, called the “red room effect”, which I have no particular reason to disbelieve.
Er… Teaching people about the placebo effect is raising the sanity waterline, or do you think >50% of the population knows about it, understands why it is relevant, and discounts studies accordingly? (I’m pretty sure they don’t, since this is a fine point of randomized studies, while most people credulously read plain correlative studies and certainly don’t appreciate any specifics of the evidence hierarchy!)
So, you are letting the perfect be the enemy of the better: in objecting that a valid criticism can be even more precisely specified. Thanks for finally being clear about it; now I can downvote the post with a clear conscience.
You can teach people about placebos (or about control responses, or about proper controls in general) without needing to perpetuate the errors stemming from a poorly-named “placebo effect” and the hidden inferences that come with the term “effect”.
How often can one pinpoint this? Is it really helpful to insist that the shorthand be expanded on to even more speculative criticisms, or are we just letting the perfect be the enemy of the better here?
Some of these criticisms are speculative because there are no standards for placebo disclosure. There is some argument that it would be useful even to researchers themselves to taboo the term “placebo effect” and instead actually think about their experimental design down to apparently minor details such as what exactly they put in the “placebo” pill.
Which LWers, or rationalists in general, actually do this?
If by “this” you mean buying actual homeopathic drugs, I have no evidence to offer—but the phrase “I knowingly use the placebo effect on myself” in this comment by mwengler (5 upvotes) strikes me as representative of an LWer making the mistake I describe.
But is he wrong? You earlier agreed that many different effects & issues combined to yield a real placebo effect, and if mwengler expects the ibuprofen to work, then doesn’t this satisfy your criteria by neatly falling into one of those effects like subject-expectancy effect?
That depends on what relief he expects (other than analgesia), which his comment didn’t specify.
Hypothetically: suppose I sprain an ankle, and take ibuprofen with the theory that in addition to relieving the pain, ibuprofen will make the ankle heal faster. I may well convince myself that this is the case, and be inclined to report as much to anyone who asks (that’s the expectancy effect in action) - but I have no serious grounds to believe that the ibuprofen has in fact caused function to be restored to my ankle. In fact, my false belief may well make things worse, by encouraging me to go for my next run sooner than I otherwise would have.
If that’s the kind of thing meant, then yes, that’s poor decision-making.