Absent any other prior, why would you use anything other than “My body will react to hormones the same way most other people’s bodies react to hormones”?
And you can’t self-experiment on risk of a heart attack. Your only endpoint is “I had a heart attack” or “I didn’t have a heart attack”, and even if you don’t mind getting your experimental result exactly one instant too late to help you, with a sample size of one you can’t draw any conclusions about whether taking HRT for ten years contributed to your heart attack or not.
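To put toy numbers on it (both risk figures below are invented for illustration, not taken from any study):

```python
# Toy illustration: how much can one observed outcome tell you?
# Both risk numbers below are hypothetical.
p_if_no_effect = 0.10  # assumed 10-year heart attack risk if HRT does nothing
p_if_harmful = 0.13    # assumed risk if HRT really does raise it

# The only two endpoints a self-experimenter can observe, and the
# likelihood ratio (evidence strength) each one provides:
lr_given_attack = p_if_harmful / p_if_no_effect                 # 1.30
lr_given_no_attack = (1 - p_if_harmful) / (1 - p_if_no_effect)  # ~0.97

print(f"likelihood ratio if you have a heart attack: {lr_given_attack:.2f}")
print(f"likelihood ratio if you don't:               {lr_given_no_attack:.2f}")
# Either outcome shifts your odds only slightly, nowhere near enough
# to tell the two hypotheses apart from a sample of one.
```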
And probably the most important reason is that medicine is weird. Even when the smartest people try to predict results that should be obvious, they very often get them wrong. “Based on what I know about the body, this sounds like it should work” is the worst reason to do anything. I know that sounds contrary to Bayes, but getting burned again and again by things that sound like they should work has recalibrated me on this one.
If you’re saying that you have unusual incentives here (e.g. that you value the possibility of adding to your natural lifespan enough that you’re willing to accept a small risk of subtracting from it, and a large risk that you’re wasting time and money), that’s fair enough.
“Because this hasn’t worked for any of the thousands of people who have tried it before, this is almost certainly going to work for me!”
And probably the most important reason is that medicine is weird.
Reality isn’t weird. What this means is that you know less about the body than you think you do.
Well, “reality isn’t weird” can mean a couple of different things. “Weird” is a two-place predicate, like “sexiness”; things are only weird in reference to some particular mind’s preconceptions. Even Yog-Sothoth doesn’t seem weird to his own mother.
But if we use the word “weird” as a red flag to tell others that they can expect to be surprised or confused when entering a certain field, as long as we can predict that their minds and preconceptions work somewhat like ours, it’s a useful word.
I think Eliezer’s “reality is not weird” post was just trying to say that we can’t blame reality for being weird, or expect things to be irreducibly weird even after we challenge our preconceptions. I don’t think Eliezer was saying that we can’t describe anything as “weird” if it actually exists; after all, he himself has been known to describe certain potential laws of physics as weird.
(man, basing an argument on the trivial word choices of a venerated community leader spotted in an old archive makes me feel so Jewish)
I think Eliezer’s “reality is not weird” post was just trying to say that we can’t blame reality for being weird,
But one can blame a theory for finding reality weird. In particular, you seem to be using “weird” to mean “frequently behaves in ways that don’t agree with our models”. That should cause you to lower your confidence in the models.
basing an argument on the trivial word choices of a venerated community leader spotted in an old archive makes me feel so Jewish
“Yes: that too is the tradition.”
What this means is that you know less about the body than you think you do.
And reality knows more. That’s why I advocate checking with reality.
Absent any other prior, why would you use anything other than “My body will react to hormones the same way most other people’s bodies react to hormones”?
First, because I am not absent other priors. I have a lifetime of information about my own body. I also have access to PubMed, Wikipedia, my 23andme genomic data, my personal medical history, my family’s medical history, and lab testing services that can take accurate measurements of me.
There are no clinical trials that have controlled for that information.
Second, because I know others are not identical to me. Basing my choices solely on some statistical outcome in a pool of patients where I have none of that kind of information (and indeed the doctors involved didn’t take that information or factor it into their solutions for their patients) strikes me as throwing out almost all of my relevant data and trusting the results produced by a blind man with a shotgun.
Moreover, refusing to experiment on yourself is refusing to look at reality and take actual data about the system you’re interested in—you. That’s poor decision theory, poor inference, and poor problem solving.
Yes, medicine is weird. Therefore, instead of thinking that you have it all worked out, or that a clinical trial has it all worked out for you, the rational thing to do is to evaluate options that might work, weigh their costs and risks, try things, take measurements, update your model based on that additional data, and try again. Sure, if there are clinical trials, avail yourself of that information as well; they’re a nice place to find candidate treatments. But you’re deluded if you think a positive result means it will assuredly work for you, and you’re deluded if you think a negative result, a “failure to reject”, means it won’t. At a minimum, if the trial didn’t include a crossover study, it hasn’t ruled out that the treatment is a perfect cure for some subset of people with the problem.
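To sketch what that measure-and-update loop could look like for an endpoint you can actually measure (all numbers hypothetical, and assuming well-behaved measurement noise):

```python
# Minimal "try, measure, update" sketch for a measurable endpoint
# (say, the change in some lab value). All numbers are hypothetical.

prior_mean = 0.0  # trial-based estimate of the treatment effect
prior_var = 4.0   # uncertainty about how that estimate applies to *you*

measurements = [-6.0, -4.0, -5.5]  # your own measured before/after changes
meas_var = 9.0                     # assumed measurement noise variance

# Conjugate normal-normal update of the effect estimate
post_precision = 1 / prior_var + len(measurements) / meas_var
post_var = 1 / post_precision
post_mean = post_var * (prior_mean / prior_var + sum(measurements) / meas_var)

print(f"posterior effect: {post_mean:.2f} +/- {post_var ** 0.5:.2f}")
# The trial prior anchors the estimate, but your own data shifts it.
# Note this only works when the endpoint is observable on yourself; it
# cannot be run on an endpoint like "a heart attack ten years from now".
```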
Any decent doctor I’ve had has basically said that all treatments are experiments for a particular person—maybe it will work for you, maybe not.
I don’t know that I have incentives any different from anyone else with a malady. I wish to get better. I recognize that there are risks involved in the attempts to get better. What doctors fail to appreciate, probably because it’s not really their problem, is that doing nothing also has a cost—the likely continuance of my malady.
We don’t limit our pool of potential solutions to solutions “validated” by double-blind placebo-controlled trials in any other aspect of life, because it isn’t rational to do so. It’s not rational for medical problems either.
It’s not about a positive result meaning something will “assuredly work for you”. Only a Sith deals in absolutes. It’s about cost-benefit analysis.
To give an example, no reasonable person would self-experiment to see if cyanide cures their rash. Although there’s a distant probability your body has some wildly unusual reaction to cyanide in which it cures rashes, it’s much more likely that cyanide will kill you, the same way it kills everyone else. Although it might be worth a shot if cyanide had no downside, we have very strong evidence that on average it has a very large downside.
The same is true of HRT. People were using it to improve their cardiovascular health. We found that, on average, it decreases cardiovascular health. You can still try using it on the grounds that it might paradoxically increase yours, but on average, you will lose utility.
Consider the analogy to a lottery. You have different numbers than everyone else does. Just because someone else lost the lottery with their numbers, doesn’t mean you will lose the lottery with your numbers. But if we study all lottery participants for ten years and find that on average they lose money, then unless you have a specific reason to think your numbers are better than everyone else’s (not just different), you should expect to lose money too.
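A quick worked version of the lottery arithmetic (price and odds invented for the example):

```python
# The expected value is the same for every choice of numbers.
ticket_price = 2.00
jackpot = 10_000_000.00
p_win = 1 / 300_000_000  # identical for any combination you pick

expected_value = p_win * jackpot - ticket_price
print(f"EV per ticket: ${expected_value:.2f}")  # about -$1.97
# Your numbers being *different* doesn't change this; only evidence
# that they're *better* (impossible here by symmetry) would.
```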
Now things would be different with a treatment with no downside (like eating a lot of some kind of food, or taking a safe and cheap supplement): as long as you don’t mind the loss of time and money, you can experiment all you want with those (though I still think you’d have trouble with bias and privileging the hypothesis, and that a rational person wouldn’t find a lot of these harmless self-experiments worth the time and money at all). And things would be different if the potential benefit and potential harm had different levels of utility for you: for example, if you wanted to cure your joint pain so badly you didn’t mind risking heart attack as a side effect. I think this is what you’re aiming at in your post above, and for those cases, I agree with you.
But when you’re taking a treatment like HRT, which is intended to prevent heart attacks but actually increases them on average, then shut up and multiply.
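Spelled out with hypothetical numbers (the same illustrative risks as above, plus a made-up utility scale), the multiplication looks like this:

```python
# "Shut up and multiply": toy expected-utility comparison.
# All probabilities and utilities are invented for illustration.
p_attack_without = 0.10   # assumed 10-year heart attack risk untreated
p_attack_with = 0.13      # assumed risk on the treatment
u_attack = -100.0         # disutility of a heart attack
u_other_benefit = 1.0     # whatever direct benefit you hope the drug gives

eu_without = p_attack_without * u_attack                 # -10.0
eu_with = p_attack_with * u_attack + u_other_benefit     # -12.0

print(f"EU without treatment: {eu_without:.1f}")
print(f"EU with treatment:    {eu_with:.1f}")
# Unless you have specific evidence that *your* numbers differ from the
# average (not merely that they might), the product says don't take it.
```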
Also, don’t call it “self-experimentation” when you’re talking about preventing cardiovascular disease, since you never end up with any usable self-data (as opposed to, say, self-experimenting with medication for joint pain, where you might get a strong result of your joint pain disappearing that you can trace with some confidence to the medication). Call it what it is—gambling.
We don’t limit our pool of potential solutions to solutions “validated” by double-blind placebo-controlled trials in any other aspect of life, because it isn’t rational to do so.
Wrong. We don’t do it either because there are no publications that answer our questions, so we have to use something else, or because we have a more convenient method that works. Please don’t appeal to “rationality”.
I don’t want to argue about who “we” is. I don’t so limit myself. YMMV.
I accurately identified a failing strategy for finding solutions. I see no reason not to accurately identify a failure in rationality as such, and every reason to do so.
Basing my choices solely on some statistical outcome in a pool of patients where I have none of that kind of information strikes me as throwing out almost all of my relevant data.
Information is relevant only to the extent you can use it. How specifically can you use it to improve on the prior provided by studies, and why would that modified estimate be an improvement? (Every improvement is a change, but not every change is an improvement.)