Fallacies of reification—the placebo effect
TL;DR: I align with the minority position that “there is a lot less to the so-called placebo effect than people tend to think there is (and the name is horribly misleading)”, a strong opinion weakly held.
The following post is an off-the-cuff reply to a G+ post of gwern’s, but I’ve been thinking about this on and off for quite a while. Were I to expand this for posting to Main, I would: a) go into more detail about the published research, b) introduce a second fallacy of reification for comparison, the so-called “10X variance in programmer productivity”.
My agenda is to have this join my short series of articles on “software engineering as a diseased discipline”, which I view as my modest attempt at “using Less Wrong ideas in your secret identity” and is covered at greater length in my book-in-progress.
I would therefore appreciate your feedback and probing at weak points.
Most of the time, talk of placebo effects (or worse of “the” placebo effect) falls victim to the reification fallacy.
My position is roughly “there is a lot less to the so-called placebo effect than people think there is (and the name is horribly misleading)”.
More precisely: the term “placebo” in the context of “placebo controlled trial” has some usefulness, when used to mean a particular way of distinguishing between the null and test hypotheses in a trial: namely, that the test and control group receive exactly the same treatment, except that you substitute, in the control group, an inert substance (or inoperative procedure) for the putatively active substance being tested.
Whatever outcome measures are used, they will generally improve somewhat even in the control group: this can be due to many things, including regression to the mean, the disease running its course, increased compliance with medical instructions due to being in a study, expectancy effects leading to biased verbal self-reports.
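One of these mechanisms, regression to the mean, is easy to demonstrate with a toy simulation (all numbers below are invented for illustration, not taken from any study): if a trial enrolls only people who currently score badly on a noisy symptom measure, their average score drifts back toward normal at follow-up with no treatment at all.

```python
import random

random.seed(0)

# Each "patient" has a stable true symptom level plus day-to-day noise.
# The distributions and the enrollment cutoff are arbitrary assumptions.
def measure(true_level):
    return true_level + random.gauss(0, 10)

patients = [random.gauss(50, 5) for _ in range(100_000)]

# Enroll only patients who score badly (>= 60) at intake, as a real trial
# implicitly does by recruiting people who currently feel ill.
enrolled = [(p, m) for p in patients if (m := measure(p)) >= 60]

# Re-measure the same patients later; nothing has treated them.
intake = sum(m for _, m in enrolled) / len(enrolled)
followup = sum(measure(p) for p, _ in enrolled) / len(enrolled)

print(f"mean score at intake:    {intake:.1f}")
print(f"mean score at follow-up: {followup:.1f}")  # lower, with no causal "effect"
```

The enrolled group "improves" at follow-up purely because their intake scores were partly noise, which does not recur; a control arm inherits exactly this artifact.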
None of these is properly speaking an “effect” causally linked to the inert substance (the “placebo pill”). The reification fallacy consists of thinking that because we give something a name (“the placebo effect”) then there must be a corresponding reality. The false inference is “the people who improved in the control group were healed by the power of the placebo effect”.
The further false inference is “there are ailments of which I could be cured by ingesting small sugar pills appropriately labeled”. Some of my friends actually leverage this into justification for buying sugar in pharmacies at a ridiculous markup. I confess to being aghast whenever this happens in my presence.
A better name has been suggested: the “control response”. This is experiment-specific, and encompasses all of the various mechanisms which make it look like “the control group improves when given a sugar pill / saline solution / sham treatment”. Moreover it avoids hinting at mysterious healing powers of the mind.
Meta-analyses of those few studies that were designed to find an actual “placebo effect” (i.e. studies with a non-treatment arm, or studies comparing objective outcome measures for different placebos) have not confirmed it; the few individual studies that find a positive effect are inconclusive for a variety of reasons.
Doubting the existence of the placebo effect will expose you to immediate contradiction from your educated peers. One explanation seems to be that the “placebo effect” is a necessary argumentative prop in the arsenal of two opposed “camps”. On the one hand proponents of CAM (Complementary and Alternative Medicine) will argue that “even if a herbal remedy is a placebo, who cares as long as it actually works” and must therefore assume that the placebo effect is real. On the other hand opponents of CAM will say “homeopathy or herbal remedies only seem to work because of the placebo effect, we can therefore dismiss all positive reports from people treating themselves with such”.
I don’t have a proper list of references yet, but see the following:
http://www.sciencebasedmedicine.org/index.php/the-placebo-myth/
http://www.skeptic.com/eskeptic/09-05-20/
http://www.skepdic.com/placebo.html
http://content.onlinejacc.org/article.aspx?articleid=1188659
I don’t have the time to dig right now, but I remember seeing studies that measured the “placebo effect” in at least two ways:
The placebo effect is proportional to the expected effect. If you give, as a painkiller, a placebo saying it is paracetamol, it’ll have a lesser effect than if you give the same placebo saying it’s an opiate derivative.
The placebo effect depends on the means of administration: a placebo pill will have less effect than a placebo injection, and a placebo sweet syrup will have less effect than a placebo bitter syrup.
If those two kinds of studies are real, how do you account for them?
Another point: there is also the nocebo effect, where people expecting side-effects of drugs do show some of them when taking a “placebo”. What’s your stance on that?
Verbal self-reports of pain sensations are especially susceptible to expectancy effects: if I think I’m getting a painkiller, I might well report less pain than I actually feel.
I don’t doubt that this is a “real” effect but we still have to distinguish pain vs. suffering, the objective and subjective components; as the saying goes “pain is inevitable, suffering is optional”. So, I readily grant that some suffering is manipulable through expectancy effects. But there might exist better, more effective ways to alleviate suffering.
To me this still doesn’t justify, e.g. selling sugar at a hundred times its market price.
As for “injection has a larger effect than pills”, for instance, I doubt the strength of the data. The Wikipedia entry on placebo references a single 1961 article for this assertion, and the article would be ideal for this purpose since it focuses on an objective outcome measure (systolic and diastolic pressure) rather than subjective pain assessment.
But the Wikipedia commentary falls straight into a classic mistake of statistical inference: it confuses a difference of significance between two groups (the injection and oral placebo groups) with a significant difference. The oral placebo didn’t show a statistically significant effect and the injection group did: this doesn’t mean that there is a statistically significant difference between the two groups.
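The mistake is easy to show numerically. The summary statistics below are invented purely for illustration (not taken from the 1961 article): one group crosses the significance threshold on its own, the other doesn’t, yet the between-group difference is nowhere near significant.

```python
import math

# Hypothetical (mean change, standard error) per group, made up for this sketch.
injection = (-6.0, 2.5)   # z = -2.40 -> "significant" on its own
oral      = (-3.0, 2.5)   # z = -1.20 -> "not significant" on its own

def z(mean, se):
    return mean / se

print(f"injection z = {z(*injection):.2f}")  # beyond the 1.96 threshold
print(f"oral      z = {z(*oral):.2f}")       # within the 1.96 threshold

# The correct comparison tests the *difference* between the groups directly.
diff = injection[0] - oral[0]
se_diff = math.sqrt(injection[1] ** 2 + oral[1] ** 2)
print(f"difference z = {diff / se_diff:.2f}")  # ~ -0.85: not significant
```

So “A was significant and B wasn’t” licenses no conclusion about A versus B; only the test on the difference does.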
The citation for “acupuncture has a stronger effect than a pill” is to a Kaptchuk study that measured self-reported pain levels, so to me it’s only saying “there is a stronger expectancy effect from having needles stuck into you than from taking a pill”, which isn’t earthshaking. Tellingly, the objective outcome measures (e.g. grip strength) showed no significant differences—my arm may feel better with needle treatment but it doesn’t work better.
I’m not finding a citation for the bitter vs sweet thing in a quick Google search.
It’s been claimed that color matters, but the one literature review I’ve looked at found these effects “inconsistent” across the studies examined. The studies were of poor quality in general and measured different things, so a meta-analysis is not possible. Annoyingly, the review reports things that strike me as irrelevant, such as “the perceived action of different coloured drugs”, which basically consists of asking people what they think a red drug will be most effective for. To me this sounded like a desperate bid to convince readers that “research contributing to a better understanding of the effect of the colour of drugs is warranted”, the conclusion of the abstract. Sigh.
There are a multitude of such effects listed in The Strange Powers of the Placebo Effect, which is a truly amazing video on the subject. Unfortunately, it doesn’t dive into details and doesn’t seem to have much in the way of citations. I would agree that the effects it describes cause problems for the argument in the article.
To add a bit more information, this book covers a number of studies on the placebo effect (since the citations are in the book, which I do not have access to now, I don’t know where to find the original studies.) These studies indicate that the strength of the placebo effect also varies according to the color of the medicine, with different colored pills acting as more effective placebos for different ailments, and that the placebo effect can outweigh the actual effects of a drug (so that a small dose of vomit-inducing medicine could cause less vomiting than the control group, which received no medicine, if offered as an anti-nausea medicine.)
Relevant excerpts on colour and vomit.
And here’s a relevant study on Pharmaceutical Packaging Color and Drug Expectancy which has some references.
Note that that study doesn’t itself have anything directly to do with the placebo effect. They made fake pictures of boxes of pills, with different colours, and asked people questions like “What do you think this drug would be used to treat?” and “How effective would you expect it to be?”. They didn’t give any drugs (real or fake) to anyone.
(That isn’t intended as a criticism of the study: it’s fine that it wasn’t studying the placebo effect—nor of acephalus’s citation of it: it does indeed have some relevant references. Just a cautionary note.)
I think there are two slightly different definitions of ‘placebo’ here, what we might call the ‘strong’ and ‘weak’ placebo effects.
The weak placebo effect, the existence of which I don’t think anyone denies, is that for phenomena which are largely mental in nature, in particular pain, the mere belief that treatment has happened is enough to ease symptoms.
The strong placebo effect, for which as far as I’m aware there is no evidence whatsoever, but which is the basis for much of so-called ‘evidence-based medicine’, is the claim that, for example, we have to compare new cancer drugs to a placebo because a placebo might cause some shrinkage of the tumour. This is, frankly, nonsense.
It is not correct that “we have to compare new cancer drugs to a placebo because a placebo might cause some shrinkage of the tumour”. (That would indeed be nonsense.)
Rather, it is warranted to compare the effect of a drug against a placebo because the improvements measured in patients taking the drug could be artifactual rather than real—they would have gotten better anyway. The placebo-controlled design therefore constitutes a severe test of the drug’s effectiveness.
In cancer trials, I understand placebos are rarely used (though apparently gaining favor); instead, the control group is given a different cancer drug, one which is known to work. The null hypothesis is “the new drug isn’t more beneficial than the old drug”.
It is not nonsense. Immune response is affected by a patient’s psychological state of mind.
The reason for cancer trials comparing against other cancer drugs has more to do with 1) those trials being part of costly clinical trials that aim for FDA/European approval, which needs comparative data, and 2) ethics board approvals being contingent on cancer patients not being treated with just a placebo in any case.
Also, no such thing as objective pain. The patient may lie about the severity of the discomfort, but if he successfully convinces himself he’s feeling better, he is feeling better.
Lastly, most of your sources make no claim such as “no such thing as a placebo effect”. If you disagree with claims of strong placebo effects, you may find support for the inverse claim, but that would be “the placebo effect is not strong”, not that it doesn’t exist in the first place.
For a DIY, if you don’t have qualms about that sort of thing, give your ailing grandma a sugar pill some time, tell her it’s some leftover pain medication. See for yourself what happens. (Disclaimer: On second thought, don’t do this. There could be a significant nocebo effect involved.)
Incidentally, it is a ubiquitous occurrence that patients report feeling the effects of medication long before it has even passed their liver. For relief of subjective symptoms, there is no deeper objective level. Compare tinnitus treatment. As for cause-and-effect: if you give a pill and there is a shift in the patient’s subjective experience strictly depending on having taken that pill, that’s as much cause and effect as it gets.
I have come across, but have yet to investigate fully, claims that immune response can be manipulated through classical conditioning. One book claims this as a placebo response mechanism. This is a much narrower claim than “affected by a patient’s psychological state of mind”.
That was not the case in the acupuncture study mentioned earlier: patients reported feeling better, but they were still experiencing reduced grip strength. They were feeling subjectively better, but that was at odds with a functional measurement of their condition.
Conditions requiring medical treatment are not, to my knowledge, exclusively subjective. Tinnitus is very much involuntary, and responds to lidocaine (including in placebo-controlled trials conducted after it was observed that tinnitus patients also “respond” to placebo). In other words, people are not able to convince themselves that their tinnitus is gone, but lidocaine can manage that, at least temporarily.
Hence my using the broadest category, leaving open the specific etiology of such an effect. “Can be affected by a patient’s psychological state of mind” is necessarily a less burdensome assertion than “can be manipulated through classical conditioning”, because the former is true if the latter is true, but not vice versa (not iff).
I’m not making the claim that placebo works for objectively quantifiable symptoms that aren’t subject to the perception of the patients. Discomfort, however, is.
There are indeed kinds of e.g. tinnitus that have a component that can be objectively measured. However, if placebo can effectively treat subjective components, that in itself would justify their usage. For many disease complexes, medication will only address partial symptoms. Which is fine. No need for a panacea.
Of course there is an involuntary cause to subjective symptoms, at least involuntary to a first approximation. The effectiveness of placebos does not preclude the effectiveness of other, actual medicine, such as lidocaine. Also, effectiveness implies only a reduction, not a cessation (“not able to convince themselves that their tinnitus is gone”).
Damn, after all this tinnitus talk now I’m cognizant of my own tinnitus. Better take another sip of my, um - (suspending disbelief) - “special” water.
Note that I don’t believe this is limited to cancer trials. Ethical considerations mean that in any situation where a treatment is known to be effective, withholding it would be wrong, so the most effective drug must be competed with. In addition, the goal of a new drug is to be better than its competitors, and comparing it to a placebo wouldn’t help with this.
If one believes that the placebo effect is true, it still doesn’t justify this sort of activity when it’s possible to buy sugar from a supermarket, put it into pills and take it, because even if you know it’s a placebo, it still works.
Furthermore, if placebos have no effect, then how do you explain the fact that “a placebo can reduce pain by both opioid and non-opioid mechanisms… In the first case, placebo analgesia is typically blocked by the opioid antagonist naloxone, whereas in the second case it is not.” This shows there are objectively measurable chemical changes taking place, and as has been said elsewhere, the brain does affect the body, the deleterious effects of chronic stress being the most obvious example.
To quote, leave off the quotation marks and begin the line with a greater-than sign
>
Fixed, thanks.
Why does one care about this? Does any of this undermine the need to use placebos, or eliminate ‘the placebo effect’ as a shorthand of how experiments/trials can fail to show that which their proponents would like them to show?
Illustrative of a common failure of rationality, with instrumental consequences such as buying sugar at ridiculous markups.
Nope.
“Control response” would be a better shorthand (obviating the need for the derived term “nocebo”). Better yet, if you’re criticizing an experimental design, would be to pinpoint the specific criticism to a mechanism: regression to the mean, natural course of the disease, measurement error, expectancy effect, and so on.
What LWers or rationalists in general actually do this?
How often can one pinpoint this? Is it really helpful to insist that the shorthand be expanded on to even more speculative criticisms, or are we just letting the perfect be the enemy of the better here?
(This is a serious question. I speculated a great deal about why poorly controlled dual n-back experiments showed a large effect, but it wasn’t until dozens of studies & 4 years later that Redick et al surveyed the subjects and enabled me to say ‘ah, so part of it was expectancy effect!’)
Part of the point I hoped to make was that “raising the sanity waterline” would be well served by better awareness of the processes of scientific inference—statistics, experimental design and so on. More people should know about regression to the mean, confounding, biases, unblinding, file drawer effects - specific criticisms.
As a specific example, take the Blackwell study which people elsewhere in this thread have pointed me to, supposedly showing that “chemically inert pills of different colors modulate vigilance”. (A claim counter to everyday experience, in which people take coffee to stay awake, rather than eat (red) strawberries.) I hope you’ll agree that “it’s the placebo effect” isn’t an appropriate shorthand to criticize that particular study.
In this case, “the placebo effect” acts as a semantic stopsign—it kills what legitimate curiosity people should have about this study, given that:
it was conducted in 1972 but appears not to have been replicated since
it fails to detail outcome measures, effect sizes, or significance levels
the sample is highly susceptible to selection bias (medical students)
the sample is highly susceptible to pressure (teacher is out to show something, students know it)
It’s very hard from the abstract to know what to make of the study, the full text can’t easily be found, and yet people are citing this one “source” all over the place (and in some cases just making the bare claim without even bothering to cite a source). I’ve come to recognize these as red flags.
At least one of us is confused. What conflict do you see between the following two propositions? (1) Eating an otherwise inert red thing can make you more alert. (2) Drinking coffee does more to make you alert than eating a red strawberry does.
(Even if there were a conflict between those, I’d have a problem with what you said, since it could be true that (2′) drinking coffee doesn’t really wake you up more than eating strawberries but (3) people drink coffee anyway, e.g. by habit or because it’s cheaper or because strawberries are more fattening or something. But my main objection is that I don’t see any conflict between 1 and 2.)
No, it isn’t, but what’s meant to follow from that? I don’t think anyone’s claiming that “it’s the placebo effect” is some kind of universally-insightful response to any observation that involves possible placebos. (Your example even seems to have the wrong sign, as it were; someone inclined to overuse “placebo effect” as an explanation is surely more likely to be defending the Blackwell study than attacking it, and indeed you go on to suggest that those people would fail to criticize the study, not that they would criticize it in an unhelpful way.)
Possibly me. I’m provisionally retracting that; my reasoning was that if eating red things, drinking from red cups etc. reliably increased alertness someone would have noticed and we would be exploiting this effect, not looking for it within the restricted context of eating a pill. However, I’m now remembering that there is just such a claim, called the “red room effect”, which I have no particular reason to disbelieve.
Er… Teaching people about the placebo effect is raising the sanity waterline, or do you think >50% of the population knows it and why it is relevant, and at all discounts studies based on it? (I’m pretty sure they don’t, since this is a fine point of randomized studies, while most people read credulously just regular correlative studies and certainly don’t appreciate any specifics of the evidence hierarchy!)
So, you are letting the perfect be the enemy of the better: in objecting that a valid criticism can be even more precisely specified. Thanks for finally being clear about it; now I can downvote the post with a clear conscience.
You can teach people about placebos (or about control responses, or about proper controls in general) without needing to perpetuate the errors stemming from a poorly-named “placebo effect” and the hidden inferences that come with the term “effect”.
Some of these criticisms are speculative because there are no standards for placebo disclosure. There is some argument that it would be useful even to researchers themselves to taboo the term “placebo effect” and instead actually think about their experimental design down to apparently minor details such as what exactly they put in the “placebo” pill.
If by “this” you mean buy actual homeopathic drugs, I have no evidence to offer—but the phrase “I knowingly use the placebo effect on myself” in this comment by mwengler (5 upvotes) strikes me as representative of an LWer making the mistake I describe.
But is he wrong? You earlier agreed that many different effects & issues combined to yield a real placebo effect, and if mwengler expects the ibuprofen to work, then doesn’t this satisfy your criteria by neatly falling into one of those effects like subject-expectancy effect?
That depends on what relief he expects (other than analgesia), which his comment didn’t specify.
Hypothetically: suppose I sprain an ankle, and take ibuprofen with the theory that in addition to relieving the pain, ibuprofen will make the ankle heal faster. I may well convince myself that this is the case, and be inclined to report as much to anyone who asks (that’s the expectancy effect in action)—but I have no serious grounds to believe that the ibuprofen has in fact caused function to be restored to my ankle. In fact, my false belief may well make things worse, by encouraging me to go for my next run sooner than I would otherwise have.
If that’s the kind of thing meant, then yes, that’s poor decision-making.
If and only if the ‘placebo effect’ is real, we should anticipate the existence of some mechanism for belief in treatment to lead to improved outcomes (e.g.).
Right, but I would say “a particular kind of placebo effect ____” to distinguish between the two meanings:
a) “expecting to heal makes you more likely to heal due to psychosomatic effects”
b) “the general stuff they do to you in conducting a trial (the regimentation, the social contact, the clinical environment) is likely to cause some healing regardless of the specific treatment being tested”
To the extent that Morendil is claiming we have to be careful to distinguish these meanings, which both go under the name “placebo effect”, I agree. Beyond that, I don’t see what kind of reification is going on; you don’t have to reject reductionism or naturalism to believe there are psychosomatic effects of the kind in a), especially in the realm of sensations of pain and well-being.
To repost skeptical lurker’s link: http://www.guardian.co.uk/science/2010/dec/22/placebo-effect-patients-sham-drug
The placebo effect could be driven by mechanisms unrelated to whether or not you believe you’re being treated and still mean pretty much the same thing—similar to priming, it could just be that mentioning the right words before giving people a sugar pill is enough, you don’t have to actually make them believe you’re giving them a painkiller.
A previous discussion about placebo effects between Morendil and me can be found here.
Interesting, I agree with what Morendil said in that previous conversation but rejected this post at the first line. I guess that’s just due to the difference between “there is no such thing as the placebo effect” and my position of “the placebo effect as commonly understood is almost entirely bullshit”.
Care to state your position more fully? Or where you disagree with the above, apart from the first line? (I do warn that my actual position is a bit more nuanced—I admit the reality of the expectancy effects on pain, and find the theories about classical conditioning somewhat plausible.)
For most part placebos don’t do anything.
Placebos make some degree of difference to how pain is processed by the brain (via ECG, not just self report).
There is some effect on short term stress levels.
If what I read is correct, the only actual physical ailments where studies suggest that placebos make any difference at all are those mediated by the aforementioned stress. A mild yet significant effect on ulcers, for example.
The reason we use placebos when we test stuff is nothing to do with the idea that our body has magic healing powers that we can trick it into using. It is far more to do with avoiding the problem of humans lying to each other and themselves in response to social incentives.
I primarily disagree with that line—which you present as a summary. Making your thesis a lazy over-generalization really undermines your position (at least to those of us who take things literally.) I would likely agree with most of your ‘nuanced’ position.
Post edited as a result of the above. Thanks!
ETA: further edit as per the reply below.
I now love this post and reversed my vote! :)
(Perhaps consider saying “most people” instead of “you”. People are easier to persuade when they are implied to already belong in the ‘right’ group rather than implied to be in the ‘wrong majority’. It’s also a little pessimistic in as much as your central position can only be true so long as your reader doesn’t believe you!)
Or even “than you may think” instead of “than you think”.
I am not sure I understand what you wanted to say in this article.
In addition to describing what you oppose, it would help me if you described your model of what exactly is happening when someone eats a piece of sugar and later shows or reports improvement.
You come to me, your doctor, with a complaint, say the common cold. I charge you a lot of money, write you a prescription for Phosphorus 6C, send you to the pharmacy where you spend a further silly amount of money on sugar with concentrations of phosphorus ten thousand times less than the allowable concentration of arsenic in drinking water.
Next week I ring you to ask how things are going. You believe that you’re supposed to have improved. It would make you feel rather silly to have gone to all this trouble to get essentially the same result as your cold clearing up of its own accord. You would rather feel clever about your choice of doctors and your overall judgement. None of that is conscious, but it still plays a role in picking your verbal response. You say “Much improved, Doc, thanks! That thing you gave me worked wonders!”
In all other particulars—nasal secretions, immune response, number of sneezes, actual misery experienced—this episode has been identical to what would have transpired if, counterfactually, you had decided to “tough it out” and allow the cold to run its course. (I may hedge my bets on the “actual misery experienced” measure, if there were a way to actually get a number for that; I think the first-order misery is the same, but possibly you feel better about feeling miserable, knowing that it’s soon going to pass.)
Do I understand it correctly that if small doses of sugar could make a measurable change in nasal secretions, immune response, number of sneezes, etc., then the premise of this article would be wrong?
Or would you have an ad-hoc explanation, e.g. that the patient is suppressing the sneezes to avoid social embarrassment? If yes, where exactly is the line between measurable things that a patient can do for social reasons, and things that a patient cannot do for social reasons? I think understanding this boundary could be useful (for example, we would know which illnesses can be treated by social means and which cannot).
Yes, the distinction I’m drawing is between outcome measures that reflect voluntary behaviours (which can therefore be strongly influenced by expectations and so on) and involuntary physiological responses.
This strikes me as a confused way of thinking about the situation. If this is an important part of your model of what’s going on, can you expand on what you mean by phrases like “feeling miserable,” “feeling better”, and “first-order misery”? (If it’s not important to your model, feel free to ignore.)
BTW, some of my thinking on this is informed by reading Deborah Mayo’s “Error and the Growth of Experimental Knowledge”. It’s not a very fun book, but it’s interesting to me because it aims to be a vigorous defense of frequentism and condemnation of Bayesianism; from Cosma Shalizi’s review I gathered that it would be a useful antidote against Bayesian groupthink.
The parts in Mayo that jiggled my thinking on placebos are where she insists very firmly that the point of experimental trials is to put hypotheses to severe tests, and that this mainly consists of amplifying the differences between “actual” and “artifactual” effects.
Proposal: to avoid the confusion Morendil is concerned about (to the extent that it actually is a problem), I suggest a general change in terminology that highlights how placebos are used in trials to establish a baseline, and de-emphasizes the suggestion that a specific mechanism (psychosomatic healing) has already been validated. Any of the following terms would, I suspect, accomplish this:
placebo portion
placebo fraction
placebo baseline
Then, I would recommend the following changes of expression:
“Yeah, patient X got better, but that was just the placebo effect.” → “Patient X’s improvement was within the placebo baseline.”
“You think you got better by using that supplement, but that was probably just the placebo effect.” → “If you got better, the improvement was probably within the placebo baseline, so I don’t think you can attribute it to any active ingredient of the supplement itself.”
“Patients in the test group showed improvement best explained by the placebo effect.” → “Patients in the test group showed improvement, but not beyond the placebo fraction.”
Morendil, do you think this would be a better way to talk about it?
How does this deal with studies (I confess I don’t have links, I read about them in Bad Science by Ben Goldacre, mentioned in this blog post I found) that show that two sugar pills ‘work’ better than one, and a saltwater injection ‘works’ better still at relieving symptoms, that different colours have different levels of effectiveness or even that one can induce negative side-effects (e.g. vomiting) simply by telling the subject that they exist.
None of these are at all explainable by regression to the mean, diseases running their course, or increased compliance with medical instructions. I would even be surprised if there was biased self-reporting going on; unless the groups were aware of each other’s existence, I doubt the one-pillers were explicitly thinking “only one pill, I probably won’t get much better”.
This seems to point strongly towards a direct causal link between believing you are going to get better and getting better.
How would you compare your opinion to Hanson’s Placebos show care position?
The observation that animals modulate immune response depending on perceptual cues is interesting, but you have to cross a fair inferential distance before you get anywhere near “the placebo effect”.
This crossing goes something like, “you’re manipulating something (lighting conditions) that has an effect only via the animal’s perceptions”, so that in effect you are “making the animal believe it’s winter”—and then arguing that “since the result of this is a modulation of the immune system, it must be the case that beliefs can have an effect on the immune system”—and from there we bridge the final gap to “therefore this might be how the placebo effect works, by using the same mechanism whereby beliefs influence the immune system”.
This final gap is speculative. Even if the hypothesis were true, it would only explain a select few of the control responses observed in experimental trials, not all of them. And the hypothesis doesn’t mean that things like regression to the mean, measurement error, expectancy effects and so on are not also playing a role in control responses—it’s only one more mechanism that joins these others in confounding the results of experimental trials.
More importantly, we don’t know that what happens in placebo-controlled studies is anything like that, and the article adduces precisely zero evidence for that hypothesis. We have no idea whether humans respond to anything of the sort, and even if that was the case we wouldn’t necessarily be able to harness the effect for curative purposes.
On top of all which Hanson adds an extra bit of speculation—“evolution shaped us to interpret being cared for in the same way that we would interpret other cues, such as long days, that we are in a situation of abundant resources”—still with not a shred of evidence.
All this reminds me of Mark Twain’s quip: “There is something fascinating about science. One gets such a wholesale return of conjecture out of a trifling investment of fact.”
Also of interest: placebo effect, iPhone edition. ;)
Reification seems at work in the studies of the placebo effect for antidepressants. It’s found that except for severe depressions, antidepressants may have “little or no greater benefit than placebo.” The conclusion drawn is either that antidepressants aren’t effective or placebos are effective, when the truth is that most depressions have a short-term course, and the placebo group’s effects include the spontaneous remissions.
sooo...
Has anyone actually tested the placebo effect against a placebo?
Full-on pretend medicine vs. “take this” vs. no intervention. I wonder if the placebo effect is actually a thing.
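The three-arm design being asked about can be sketched as a toy simulation. Everything here is an assumption chosen for illustration: in this made-up world the sugar pill adds nothing beyond natural recovery, while the drug adds a real effect.

```python
import random

random.seed(1)

# Assumed effect sizes (improvement in arbitrary symptom units); not data.
NATURAL_RECOVERY = 5.0  # improvement everyone shows as the disease runs its course
SHAM_EFFECT = 0.0       # extra improvement from the sugar pill itself (assumed zero)
DRUG_EFFECT = 4.0       # extra improvement from the active substance

def arm(extra, n=200):
    """Simulate one trial arm: recovery + arm-specific effect + noise."""
    return [NATURAL_RECOVERY + extra + random.gauss(0, 3) for _ in range(n)]

no_treatment = arm(0.0)
sham = arm(SHAM_EFFECT)
drug = arm(DRUG_EFFECT)

mean = lambda xs: sum(xs) / len(xs)
print(f"no treatment: {mean(no_treatment):.1f}")
print(f"sham:         {mean(sham):.1f}")
print(f"drug:         {mean(drug):.1f}")
```

A two-arm trial (sham vs. drug only) would invite crediting the sham arm’s entire improvement to “the placebo effect”; the no-treatment arm is what reveals, in this simulation, that it is all natural recovery.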
So you’re saying the placebo effect reduces to the Hawthorne effect?
The Placebo Effect should be the difference between No Treatment and Sham Treatment for an identified problem. One could also consider an Ignorance Treatment where patients are asymptomatic, and a Denial Treatment, where you just tell the patient “it’s nothing, don’t worry about it”.
I believe studies have been able to show statistically significant differences in outcomes between Placebo Treatment and No Treatment.
I’m personally quite annoyed with knee jerk “that’s just placebo” claims, which then go on to claim that if a certain statistic of a certain treatment regime for a certain substance on a certain population didn’t pass some statistical significance test relative to a sham treatment, then the substance “has no effect”, etc.
So I don’t think the problem is that Sham Treatments have no statistically significant effect on outcome, but that people leap to unwarranted conclusions from failures to reject null hypotheses.
Somewhat echoing gwern’s “why do we care”—can you taboo “placebo effect”?
That’s basically what I thought the article was doing.
OK. Judging from much of this discussion, it hasn’t really done that well. My taboo request was prompted because I was confused about why you would make two completely distinct points, namely the conceptual/reification point and the point that expectation effects are mostly bunk, in the same post, unless there is some sort of weird connection I’m missing.