This reminds me of Scott Alexander’s post on Pascalian Medicine: https://astralcodexten.substack.com/p/pascalian-medicine
The gist of that post is that if you take dozens to hundreds of medications for something, each of which is unlikely to work but carries basically no risk, the net effect of the medicines is in theory positive. I think this is probably false for acute illnesses, but it might be true for chronic illnesses: the net damage of the illness is larger (barring consequences like death from the acute illness), and the odds of a negative interaction between medications are lower when you're only taking a handful at a given time (again, so long as nothing kills you; I don't know how to account for that, because I don't know how likely it is). As a result, even though each individual dose might have negative expected value on its own (a couple of random side effects and no effect on your condition), the information you gain from ruling a medication out can make the lifetime consequences of taking it positive. I have no idea how to even guess at the frequency with which a given medication will solve a given problem through pure luck, which is unfortunately the central number in this calculation.
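To make the shape of that calculation concrete, here is a minimal expected-value sketch in Python. Every number in it (cure probability, side-effect cost, illness burden, duration) is an invented placeholder, since the true cure-by-luck frequency is exactly the unknown quantity above.

```python
# Toy expected-value model for "Pascalian medicine" on a chronic illness.
# Every number here is an illustrative assumption, not a real estimate.

p_cure = 0.01           # assumed chance a random long-shot medication works
side_effect_cost = 0.5  # assumed one-off cost of trying it (side effects, hassle)
monthly_burden = 10.0   # assumed monthly cost of living with the illness
months_remaining = 120  # assumed remaining duration of the chronic illness

# Value of a cure: all the illness burden you avoid going forward.
cure_value = monthly_burden * months_remaining

# Expected value of one trial: with probability p_cure you end the illness;
# otherwise you pay the side-effect cost and learn this medication is a dead end.
ev_per_trial = p_cure * cure_value - (1 - p_cure) * side_effect_cost

print(f"expected value per trial: {ev_per_trial:+.2f}")
# With these placeholder numbers the EV is positive (about +11.5), which is
# the sense in which a long series of cheap long shots can be worth it for a
# chronic illness, and why p_cure is the number that decides everything.
```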
I’m glad you found something that works for you!
Yeah, I would only make the case for this when a cure would be obvious. Medicine that insists its lack of effect is a sign you need more, ad infinitum, is the worst; but I'd apply this to standard medicine too, and standard doctors take it worse than alt practitioners.
I also think Scott is mostly but not entirely right on the Algernon effect, i.e. that it's easier to hurt yourself than to help. That is mostly true, but chronic problems are often caused by your normal homeostasis mechanisms getting caught in a bad equilibrium, and knocking yourself out of that equilibrium can be a step into the adaptive valley that separates it from a better one. But that can only hold for things where there's a good equilibrium to be had, which I think is much more true for digestion and inflammation than for e.g. aging.
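As a toy illustration of the bad-equilibrium idea (my sketch, not a model from the comment): a one-dimensional homeostatic system with two stable fixed points. A small perturbation relaxes back to the bad equilibrium, while a large enough one crosses the unstable point between the basins and the system then settles into the good equilibrium on its own. The specific dynamics are invented for the sketch.

```python
# Toy bistable homeostasis model: dx/dt = x - x**3
# Stable equilibria at x = -1 ("bad") and x = +1 ("good");
# an unstable equilibrium at x = 0 separates the two basins.
# Purely illustrative: the dynamics are invented for this sketch.

def step(x, dt=0.01):
    return x + dt * (x - x**3)

def settle(x, n=10_000):
    for _ in range(n):
        x = step(x)
    return x

x = -1.0                 # stuck at the bad equilibrium
print(settle(x))         # stays at -1.0: homeostasis holds

print(settle(x + 0.5))   # small kick, not past the unstable point at 0:
                         # relaxes back to -1.0

print(settle(x + 1.5))   # big kick, crosses the barrier:
                         # settles near +1.0, the good equilibrium
```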
I just want to note the origin and context of “Algernon effect” for anyone who might stumble across this. Eliezer Yudkowsky based the term “Algernon’s Law” on the SF book Flowers for Algernon and used it loosely for the idea that evolution has probably already found most of the simple ways to increase human intelligence that benefit transmission of the genes involved. Then Gwern built on Eliezer’s writing and others’ in his coverage of purported intelligence-enhancing drugs and other practices. Scott cited Gwern in redefining Algernon’s Law to mean “your body is already mostly optimal, so adding more things is unlikely to have large positive effects unless there’s some really good reason,” and now it’s being used here to mean “it’s easier to hurt yourself than help.”
I haven’t looked much into intelligence research, but the mainstream understanding of this idea in aging research is based on antagonistic pleiotropy and diminishing selection pressure with age.
Genes that cause disadvantages at later ages (which impact fewer organisms) may give a reproductive advantage at a younger age, and thereby achieve a net reproductive advantage.
The optimizing pressure of natural selection diminishes with age, particularly in the post-reproductive part of the life cycle.
This helps explain why people age, which is just another word for the development of health problems over time and the mortality risk they cause. It may also help explain evolutionary limits on intelligence. A gene that enhances intelligence, but lowers the chance of reproduction overall in the ancestral environment, will be selected against. For example, if a gene increases intelligence, but delays puberty, causing the organism to suffer more brushes with death in the wild, evolution may select it out of the gene pool—even though this particular form of evolutionary cost may not be one that we particularly care about, or that even impacts us very much in our modern, low-risk environment.
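Here's a back-of-the-envelope version of the antagonistic-pleiotropy logic (my illustration; the survival and fecundity numbers are invented): weight each age's reproduction by the probability of surviving to that age, and a large late-life cost can be outweighed by a small early-life gain, because few organisms survive to pay it.

```python
# Toy antagonistic-pleiotropy calculation. All numbers are invented
# placeholders chosen to make the logic visible.

# Assumed probability of surviving from birth to each decade of adulthood
# in a harsh ancestral environment.
survival = [0.60, 0.40, 0.25, 0.10]        # roughly ages 15, 25, 35, 45
baseline_fecundity = [1.0, 1.0, 1.0, 1.0]  # offspring per decade if alive

def lifetime_reproduction(fecundity):
    # Expected lifetime offspring = sum over ages of
    # P(alive at that age) * reproduction at that age.
    return sum(s * f for s, f in zip(survival, fecundity))

# An allele that adds 15% fecundity in the first two decades but is
# outright sterilizing in the last:
pleiotropic = [1.15, 1.15, 1.0, 0.0]

print(lifetime_reproduction(baseline_fecundity))  # about 1.35
print(lifetime_reproduction(pleiotropic))         # about 1.40: still wins
# Because so few organisms survive to pay the late-life cost, selection
# favors the allele despite the damage it does at older ages.
```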
None of this necessarily contradicts Elizabeth’s comment; it’s just added context.