This post doesn't really address the factors which I am most skeptical about (and which I would expect others to be most skeptical about).

I expect that the primary bottleneck to the effectiveness of an advocacy/fundraising-focused aging foundation is not its effectiveness at attracting money or talent, but its effectiveness at directing that money or talent at the right questions. Identifying competent experts without having some expertise in an area oneself is hard. Large amounts of aging research don't really tell us anything, because they weren't asking useful questions in the first place. Aging is a very high-dimensional problem, so searching in the right direction is orders of magnitude more important than throwing lots of resources at it.

(One example of a red flag: on lifespan.io's aging page they say "Deep learning is ideally suited to aging research, especially for tasks that require a high volume of data to be processed accurately; computers, unlike people, do not suffer bias and can consistently work to a high degree of accuracy at a far faster speed than we can." Whoever wrote that sentence very likely has no idea what they're talking about.)

You do mention:

So far, I can comment on the SENS projects that it financed, which I think were among the most effective pursued at SENS, and in the cause area in general

If we want to evaluate LEAF's effectiveness, I'd expect this sort of thing to be the central topic. We need to look at what projects they've funded, what questions those projects have posed or answered, and how central those questions are to advancing our understanding of aging. This cannot be done while remaining agnostic about technical details, and it will inherently require criticizing a lot of work done in the field. We have to take a position on which aging research is most useful. (And no, things like citation counts or researcher surveys are not a good way to do this; they mostly measure researcher status, which is not a reliable proxy for research usefulness in a field where most researchers don't even have a strong understanding of, e.g., statistics and causality.)

That said… I'd make an exception here for policy advocacy. There, the goal is not to address the technical problems, but to create better incentives for anti-aging work (e.g. by making "aging" a valid disease target for pharmaceuticals). That's a different goal, and work in that direction should be evaluated differently from directing resources to research.

That said… I'd make an exception here for policy advocacy.

When it comes to policy advocacy, it's important to be taken seriously by the relevant stakeholders. I'm not sure to what extent lifespan.io is presently good at that.

The framing of the "Lifespan Heroes" campaign might be good for motivating people to donate, but at the same time I expect any mainstream medical person's crackpot radar to be triggered when looking over the website.

When it comes to asking questions, it might be worth asking them how they think about the tradeoff between appearing serious and engaging people outside the medical field.

This is an interesting comment; I think you bring up good points.

One reason why I didn't focus much on crowdfunding is that the money that goes into it is not really LEAF's, and crowdfunding is just one of their many focuses. If an EA decides to give money to LEAF (through the recurring campaign or through a grant, for example), that money will probably not go to a crowdfunding campaign, and it would probably not have much impact on how LEAF decides whom to crowdfund; it would go to their other projects. When donating to a campaign, you donate to the specific organization that benefits from the campaign's project, not to LEAF. Unlike organizations such as Open Phil, LEAF doesn't make grants directly; it only organizes campaigns so that people can bring money to a project.

You probably already knew everything in the paragraph above, so: I think your point is correct. Where exactly they direct money by choosing whom to finance is important for ascertaining whether the research that wouldn't otherwise have happened is actually making an impact (or any impact at all, given the characteristics of this field). A point in their favor, from my point of view, is that they seem internally sympathetic to SENS' approach (this is obvious from reading their introductory articles), although they have also financed different approaches (one campaign is for a project involving NMN supplementation led by David Sinclair, and a couple of others are on biomarkers). But I admit that's not much, and a more detailed look would be ideal. For now, if you are more concerned about the science than about YouTube/internet advocacy, policy influencing, etc., it is probably best to donate directly to organizations doing specific scientific research.

Since I couldn't evaluate much by looking at crowdfunding alone, I followed the methodology of trying to gauge the ratio of donations to money brought to the field, which I've seen used a lot for evaluating advocacy charities within EA.

Maybe we'll be able to ascertain their decision-making regarding crowdfunding better (although probably not a lot better) after the interview, since the first question is about that.