I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who became convinced by their experience to do fake research instead), although I don’t know the specifics as well. EAs generally think that the vast majority of charities are doing low-value and/or fake work.
Do I understand correctly that here by “fake” you mean low-value, or only pretending to be aimed at solving the most important problems of humanity, rather than actual falsification: publishing false data, that kind of thing?
As an example of the difficulties posed by illusions of transparency: when I first read the post, my first interpretation of “largely fake research” was neither what you said nor what jessicata clarified below. I simply assumed that “fake research” ⇒ “untrue,” in the sense that people who updated on >50% of the research from those orgs will on average have a worse Brier score on related topics. This didn’t seem unlikely to me on its face, since random error, motivated reasoning, and other systematic biases can all contribute to having bad models of the world.
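To make the Brier-score framing concrete, here is a minimal sketch; the forecasts and outcomes are invented purely for illustration, not drawn from any actual org's research record:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better; a perfectly calibrated, perfectly confident
    forecaster scores 0, and constant 0.5 guessing scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical reader before updating on the orgs' research:
before = brier_score([0.7, 0.3, 0.8], [1, 0, 1])   # ≈ 0.073

# Same reader after deferring to biased conclusions on the same questions:
after = brier_score([0.95, 0.6, 0.5], [1, 0, 1])   # ≈ 0.204
```

On this reading, "fake research" is research that moves readers' probabilities in the wrong direction, so their average Brier score gets worse rather than better.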
Since 3 people can have 4 different interpretations of the same phrase, I worry that there are many other semantic confusions I didn’t spot.
I mean pretending to be aimed at solving the most important problems, and also creating organizational incentives for actual bias in the data. For example, I heard from someone at GiveWell that, when they wrote a report saying that a certain intervention had (small) health downsides as well as upsides, their supervisor said that the fact that these downsides were investigated at all (even if they were small) decreased the chance of the intervention being approved, which creates an obvious incentive not to investigate downsides.
There’s also a divergence between GiveWell’s internal analysis and their more external presentation and marketing; for example, although SCI is and was listed as a global health charity, GiveWell’s analysis found that, while there was a measurable positive effect on income, there wasn’t one on health metrics.
That doesn’t sound right? They say there is strong evidence that deworming kills the parasites, and weaker evidence that it both improves short-term health and leads to higher incomes later in life. But inasmuch as it improves income, it pretty much has to be doing that via making health better: there isn’t really any other plausible path from deworming to higher income. https://www.givewell.org/international/technical/programs/deworming
I’d expect that to show up in some long-run health metrics if that were the mechanism, though.
One way this could be net neutral is that it helps kids with worms but hurts kids without worms. They don’t test for high parasitic load before administering these pills, they give them to all the kids (using coercive methods).
But also, killing foreign creatures living in the body is often bad for health. This is a surprising fact: on first principles I’d have predicted that mass administration of antibiotics would improve health by killing off gut bacteria, but this seems not to be generically true, and sometimes we even suffer from the missing gut bugs (e.g. probiotics, and, more directly relevant, helminth therapy).
GiveWell discusses this here: https://www.givewell.org/international/technical/programs/deworming
Summary:
~0.1kg weight increase
Unusably noisy data on hemoglobin levels
No effect on height
While other metrics might show a change if collected carefully, I think all we know at this point is that no one has done that research? Which is very different from saying that we know there is no effect on health?
Neither Jessica nor I said there was no effect on health. It seems like maybe we agree that there was no clearly significant, actually measured effect on long-run health. And GiveWell’s marketing presents its recommendations as reflecting a justified high level of epistemic confidence in the benefit claims of its top charities.
We know that people have looked for long-run effects on health and failed to find anything more significant than the levels that routinely fail replication. With an income effect that huge attributable to health, I’d expect a huge, p<.001 improvement in some metric like reaction times, fertility, or a reduction in the incidence of some well-defined, easy-to-measure malnutrition-related disease.
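To make the “I’d expect a p<.001 improvement” intuition concrete, here is a rough normal-approximation power sketch. The effect size, alpha, and power targets are illustrative assumptions, not figures from the deworming literature or from GiveWell:

```python
from statistics import NormalDist

def required_n_per_arm(effect_size_d, alpha=0.001, power=0.8):
    """Approximate sample size per arm for a two-sided, two-sample z-test.

    effect_size_d is Cohen's d (standardized mean difference).
    Uses the standard normal-approximation formula:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return 2 * ((z_alpha + z_power) / effect_size_d) ** 2

# A hypothetical health effect big enough to drive large income gains,
# say d = 0.3, would reach p < .001 with only a few hundred subjects per arm:
n = required_n_per_arm(0.3)  # roughly 380 per arm
```

The point of the sketch is that the deworming trials were large; if a health effect of that magnitude existed and the right metric had been measured, crossing a p<.001 bar would not require an unusually big study.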
Worth noting that antibiotics (in a similar epistemic reference class to dewormers for reasons I mentioned above) are used to fatten livestock, so we should end up with some combination of:
Skepticism of weight gain as evidence of benefit.
Increased credence that normal humans can individually get bigger and healthier by taking antibiotics to kill off their gut bacteria.
I mostly favor the former, because when I was prescribed antibiotics for acne as a kid they made me feel miserable, which I would not describe as an improvement in health, and because in general it seems like people trying to know about this stuff think antibiotics are bad for you, and only worth it if you have an unusually harmful bacterial infection.
I had read “GiveWell’s analysis found that, while there was a measurable positive effect on income, there wasn’t one on health metrics” as “there was an effect on income that was measurable and positive, but there wasn’t an effect on health metrics”. Rereading, I think that’s probably not what Jessica meant, though? Sorry!
Yeah, I meant there wasn’t a measurable positive health effect.