hypothesize that they’re incompetent/dishonest at diluting and left some active agent in there.
OK, so you take it to your chem lab and they confirm that the composition is pure sugar, as far as they can tell. How many alternatives would you keep inventing before you update your probability of it actually working?
In other words, when do you update your probability that there is a sniper out there, as opposed to “there is a regular soldier close by”?
Before I update my probability of it working? The alternatives are rising to the surface because I’m updating the probability of it working.
OK, I suppose this makes sense. Let me phrase it differently. What personal experience (not published peer-reviewed placebo-controlled randomized studies) would cause you to be convinced that what is essentially magically prepared water is as valid a remedy as, say, Claritin?
Well, I hate to say this for obvious reasons, but if the magic sugar water cured my hayfever just once, I’d try it again, and if it worked again, I’d try it again. And once it had worked a few times, I’d probably keep trying it even if it occasionally failed.
If it consistently worked reliably I’d start looking for better explanations. If no-one could offer one I’d probably start believing in magic.
I guess not believing in magic is something to do with not expecting this sort of thing to happen.
(This tripped my positive bias sense: only testing the outcome in the presence of an intervention doesn’t establish that it’s doing anything. It’s not enough to try again and again after something seemed to work; one should also try not doing it and see whether it stops working. Scattering anti-tiger pills around town also “works”: if one does that every day, there will be no tigers in the neighborhood.)
That’s a bad analogy. If “anti-tiger pills” repeatedly got rid of a previously observed real tiger, you would be well advised to give the issue some thought.
What’s that line about how, if you treat a cold, you can get rid of it in seven days, but otherwise it lasts a week?
You would still want to check to see whether tigers disappear even when no “anti-tiger pills” are administered.
Depending on your probability that the pills successfully repel the tiger given that they’ve always got rid of it so far, the cost of the pills, and how many people the tiger is expected to eat if it doesn’t disappear? Not always.
A medical example of this is the lack of evidence for the efficacy of antihistamine against anaphylaxis. When I asked my sister (currently going through clinical school) about why, she said, “because if you do a study, people in the control group will die if these things work, and we have good reason to believe they do.”
EDIT: I got beaten to posting this by the only other person I told about it
Yes. But the point is that this number should be negligible if you haven’t seen how the tiger behaves in the absence of the pills. (All of this assumes that you do not have any causal model linking pill-presence to tiger-absence.)
This case differs from the use of antihistamine against anaphylaxis for two reasons:
There is some theoretical reason to anticipate that antihistamine would help against anaphylaxis, even if the connection hasn’t been nailed down with double-blind experiments.
We have cases where people with anaphylaxis did not receive antihistamine, so we can compare cases with and without antihistamine. The observations might not have met the rigorous conditions of a scientific experiment, but that is not necessary for the evidence to be rational and to justify action.
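The earlier point, that this number “should be negligible” if you haven’t seen the baseline, can be made concrete with a toy Bayes calculation. All the numbers here are invented for illustration: the same streak of disappearances is weak or strong evidence depending on how often tigers wander off on their own.

```python
# Toy Bayesian update: do "anti-tiger pills" repel tigers?
# H1: the pills work (the tiger always disappears when pills are used)
# H0: the pills do nothing (the tiger disappears at its base rate anyway)

def posterior_pills_work(prior, base_rate, n_disappearances):
    """P(pills work | tiger disappeared n times in a row while using pills)."""
    like_h1 = 1.0 ** n_disappearances        # under H1, disappearance is guaranteed
    like_h0 = base_rate ** n_disappearances  # under H0, chance alone explains it
    return prior * like_h1 / (prior * like_h1 + (1 - prior) * like_h0)

prior = 0.01  # made-up prior that the pills do anything at all

# If tigers almost always wander off by themselves, the streak is weak evidence:
print(posterior_pills_work(prior, base_rate=0.95, n_disappearances=10))

# If tigers rarely disappear on their own, the same streak is strong evidence:
print(posterior_pills_work(prior, base_rate=0.30, n_disappearances=10))
```

The base rate is exactly the thing you cannot estimate if you never observe the tiger in the absence of the pills.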
Absolutely. The precise thing that matters is the probability that tigers show up if you don’t use the pills. So, say, I wouldn’t recommend doing the experiment if you live in an area with a high density of tigers (which you do if there’s one showing up every day!) and you weren’t sure what was going into the pills (tiger poison?), but I would recommend doing the experiment if you lived in London and knew that the pills were just sugar.
Similarly, I’m more likely to just go for a herbal remedy that hasn’t had scientific testing, but has lots of anecdotal evidence for lack of side-effects, than a homeopathic remedy with the same amount of recommendation.
It is positive bias (in that this isn’t the best way to acquire knowledge), but there’s a secondary effect: the value of knowing whether or not the magic sugar water cures his hayfever is being traded off against the value of not having hayfever.
Depending on how frequently he gets hayfever, and how long it took to go away without magic sugar water, and how bothersome it is, and how costly the magic sugar water is, it may be better to have an unexplained ritual for that portion of his life than to do informative experiments.
(And, given that the placebo effect is real, if he thinks the magic sugar water is placebo, that’s reason enough to drink it without superior alternatives.)
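That trade-off can be put into a toy value-of-information calculation. Every number below is invented, and it leans on an unrealistic simplifying assumption (that a single skipped dose settles the question, with no spontaneous remission); the point is only that “do the informative experiment” is itself a decision with an expected cost.

```python
# Toy value-of-information sketch: keep the sugar-water ritual forever,
# or skip it once to learn whether it works? All numbers are made up.

p_works     = 0.3    # his credence that the sugar water actually cures a flare-up
cost_flare  = 10.0   # disutility of suffering one hayfever flare-up
cost_ritual = 0.1    # disutility of performing the ritual once
n_future    = 100    # future flare-ups the answer would apply to

def per_episode_cost_with_ritual(p):
    # Pay the ritual cost every time; still suffer whenever the water doesn't work.
    return cost_ritual + (1 - p) * cost_flare

# Strategy A: never test, keep the ritual for this episode and all future ones.
cost_never_test = (n_future + 1) * per_episode_cost_with_ritual(p_works)

# Strategy B: skip the water this once (suffering this flare-up for sure),
# then keep the ritual only if it turned out to work.
cost_test = cost_flare + n_future * (
    p_works * per_episode_cost_with_ritual(1.0)  # it works: keep the ritual
    + (1 - p_works) * cost_flare                 # it doesn't: drop it, suffer anyway
)

print(f"always ritual: {cost_never_test:.1f}, test once: {cost_test:.1f}")
```

With these particular made-up numbers the experiment narrowly wins, but shifting the cost of a flare-up or the number of remaining episodes can flip the answer, which is the commenter’s point.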
Agree with this. Knowing the truth has a value and a cost (doing the experiment).
I recently heard something along the lines of: “We don’t have proof that antihistamines work to treat anaphylaxis, because we haven’t done the study. But the reason we haven’t done the study is because we’re pretty sure the control group would die.”
I agree, I’d try not taking it too! I had hayfever as a child, and it was bloody awful. I used to put onion juice in my eyes because it was the only thing that would provide relief. But even as a child I was curious enough to try it both ways.
The placebo effect strikes me as a decent enough explanation.
Maybe, but it also explains why any other thing will cure my hayfever. And shouldn’t it go away if I realize it’s a placebo? And if I say ‘I have this one thing that cures my hayfever reliably, and no other thing does, but it has no mechanism except for the placebo effect’, is that very different from ‘I have this magic thing?’.
I’m not keen on explanations which don’t tell me what to anticipate. But maybe I misunderstand the placebo effect. How would I tell the difference between it and magic?
No, not very. Also, if it turns out that only this one thing works, and no other thing works, then (correcting for the usual expectation effects) that is relatively strong evidence that something more than the placebo effect is going on. Conversely, if it is the placebo effect, I would expect that a variety of substances could replace the sugar pills without changing the effect much.
Another way of putting this is, if I believe that the placebo effect is curing my hayfever, that ultimately means the power to cure my hayfever resides inside my brain and the question is how to arrange things so that that power gets applied properly. If I believe that this pill cures my hayfever (whether via “the placebo effect” or via “magic” or via “science” or whatever other dimly understood label I tack onto the process), that means the power resides outside my brain and the question is how to secure a steady supply of the pill.
Those two conditions seem pretty different to me.
Apparently not. The effect might be smaller; I don’t think the study checked. But once you know it’s a placebo and the placebo works, then you’re no longer taking a sugar pill expecting nothing; you’re taking a sugar pill expecting to get better.
You could tell the difference between the placebo effect and magic by doing a double-blind trial on yourself. E.g. get someone to assign either the “magic” pill or an identical sugar pill (or solution) with a random number generator for the period when you’ll be taking the drug, prepare them and put them in order for you to take on successive days, and write down the order to check later. Then don’t talk to them for the duration of the experiment. (If you want to talk to them, you can apply your own shuffle and write down how to reverse it.)
Wait, what? I’m taking a stack of identical things and whether they work or not depends on a randomly generated list I’ve never seen?
Exactly. You write down your observations for each day and then compare them to the list to see if you felt better on days when you were taking the actual pill.
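The schedule-preparation step might be sketched like this. The helper function is hypothetical; the only things that matter are that the assignment is random and that the key stays hidden from you until the end.

```python
# Minimal sketch of the blinding step: a non-participating helper randomizes
# "magic" vs plain sugar pill for each day and keeps the key until unblinding.
import random

def make_blinded_schedule(n_days, seed=None):
    rng = random.Random(seed)
    # The participant only ever sees numbered, identical-looking doses;
    # this key is written down (or sealed) by the helper for later comparison.
    return [rng.choice(["magic", "plain"]) for _ in range(n_days)]

key = make_blinded_schedule(30, seed=42)
# ... 30 days pass; the participant records symptoms without seeing `key` ...
print(key[:5])
```

The seed argument is only there so the helper can reproduce their own list; the participant should never learn it.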
Only if it’s not too costly to check, of course, and sometimes it is.
Edit: I think gwern’s done a number of self-trials, though I haven’t looked at his exact methodology.
Edit again: In case I haven’t been clear enough, I’m proposing a method to distinguish between “sugar pills that are magic” and “regular sugar pills”.
Edit3: ninja’d
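The unblinding comparison, checking whether you felt better on the days you got the real pill, can be sketched as a simple permutation test. The daily symptom scores below are fabricated for illustration, and the alternating key is just to keep the example short (a real key would be randomized).

```python
# Did symptom scores differ between "magic" days and plain-sugar days?
# A permutation test avoids distributional assumptions. Data is fake.
import random

key    = ["magic", "plain"] * 10                 # the helper's hidden list
scores = [2, 5, 1, 6, 3, 5, 2, 7, 1, 6,          # daily hayfever severity,
          2, 5, 3, 6, 2, 4, 1, 6, 2, 5]          # lower = better

def mean_diff(labels, scores):
    magic = [s for k, s in zip(labels, scores) if k == "magic"]
    plain = [s for k, s in zip(labels, scores) if k == "plain"]
    return sum(plain) / len(plain) - sum(magic) / len(magic)

observed = mean_diff(key, scores)

# How often does randomly shuffling the labels produce a gap at least
# as large as the one we observed?
rng = random.Random(0)
n_perm = 10_000
count = 0
for _ in range(n_perm):
    shuffled = key[:]
    rng.shuffle(shuffled)
    if mean_diff(shuffled, scores) >= observed:
        count += 1
p_value = count / n_perm

print(f"observed gap: {observed:.2f}, permutation p-value: {p_value:.4f}")
```

A small p-value here says the gap is unlikely under pure label-shuffling; it still cannot distinguish “magic” from any other real causal effect of the pill.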
If you have a selection of ‘magic’ sugar pills, and you want to test them for being magic vs placebo effect, you do a study comparing their efficacy to that of ‘non-magic’ sugar pills.
If they are magic, then you aren’t comparing identical things, because only some of them have the ‘magic’ property.
Well, you need it to work better than without the magic sugar water.
My approach is: I believe that the strategy of “if the magic sugar water worked, with only a 1 in a million probability of ‘worked’ being obtained by chance without any sugar water, and if only a small number of alternative cures were also tried, then adopt the belief that the magic sugar water works” is a strategy that has only a small risk of trusting in a non-working cure, but is very robust against unknown unknowns. It works even if you are living in a simulator where the beings-above mess with the internals, doing all sorts of weird stuff that shouldn’t happen and for which you might be tempted to set a very low prior.
Meanwhile the strategy of “make up a very low prior, then update it in a vaguely Bayesian manner” has a past history of screwing up big time, leading to significant preventable deaths, e.g. when the antiseptic practices introduced by Semmelweis were rejected on the grounds of ‘sounds implausible’. It has pretty much no robustness against unknown unknowns, and as such is grossly irrational (in the conventional sense of ‘rational’) even though in the magic water example it sounds like an awesomely good idea.
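The “1 in a million by chance” threshold can be sketched with a quick calculation, assuming (purely for illustration) independent episodes and a made-up spontaneous-remission rate.

```python
# If a flare-up clears up on its own within the observation window with
# probability p, then n consecutive apparent cures occur by pure chance
# with probability p**n (assuming independent episodes).
import math

p_chance = 0.3  # invented chance of spontaneous same-day remission

# Smallest number of consecutive cures before chance alone drops below
# one in a million:
n = math.ceil(math.log(1e-6) / math.log(p_chance))
print(n, p_chance ** n)
```

The required streak length is quite sensitive to the assumed base rate, which is why the untested base rate keeps coming up in this thread.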
Precisely the hypotheses that are more likely than homeopathy. Once I’ve falsified those, the probability starts pouring into homeopathy. Jaynes’ “Probability Theory: The Logic of Science” explains this really well in Chapter 4 and the “telepathy” example of Chapter 5. In particular, I learnt a lot by staring at Figure 4.1.
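The “probability pouring” mechanic can be sketched numerically. The hypotheses and numbers below are invented; the point is only the renormalization step, in which mass from a falsified hypothesis flows to whatever remains.

```python
# Toy illustration of probability mass "pouring" into a low-prior hypothesis
# as more likely alternatives are falsified one by one.

priors = {
    "contaminated dilution": 0.60,
    "mislabelled active drug": 0.30,
    "spontaneous remission": 0.0999,
    "homeopathy works": 0.0001,
}

def falsify(dist, hypothesis):
    """Zero out a hypothesis and renormalize the remaining probability."""
    dist = {h: (0.0 if h == hypothesis else p) for h, p in dist.items()}
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

dist = falsify(priors, "contaminated dilution")
dist = falsify(dist, "mislabelled active drug")
print(dist["homeopathy works"])  # leftover mass flows to what remains
```

Even after two falsifications, “homeopathy works” here only climbs relative to the one surviving alternative; it stays tiny until everything more likely is gone, which matches the figure the commenter cites.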