I know many EAs and consider many of them friends, but I do not centrally view the world in EA terms, or share the EA moral or ethical frameworks. I don’t use what seem, for all practical purposes, to be their decision theories. I have very large, very deep, very central disagreements with EA and its core components and central organizations and modes of operation. I have deep worries that important things are deeply, deeply wrong, especially epistemically, and that this results in an increasingly Goodharted and inherently political and insider-biased system. I worry that this does intense psychological, epistemic and life experiential damage to many EAs.
(1) I wish we distinguished between endorsing doing good better and endorsing the EA movement/community/etc. The current definition of EA is something like:
Effective altruism is the project of:
Using evidence and reason to find the most promising causes to work on.
Taking action, by using our time and money to do the most good we can.
I assume that you roughly endorse this? At the least, one could endorse narrow principles of EA while being quite concerned about the movement/community/etc. So (2) I’m curious what “the EA moral or ethical frameworks” that you disagree with are. Indeed, the standard EA position is that there is no ‘EA moral framework,’ or perhaps the more honest consensus is ‘tentative rough welfarism.’ And most important:
(3) I’m curious what your “very large, very deep, very central disagreements with EA and its core components and central organizations and modes of operation” are; knowing this could be quite valuable to me and others. I consider myself an EA, but I think you know more about its “central organizations and modes of operation” than I do, and I would update against the movement/community/etc* if given reason to do so. If being involved in organized EA is a mistake, please help me see why.
(Responses from non-Zvi readers would also be valuable, as would be directing me to existing writing on these topics.)
*Edit: I meant (epistemically) update against my ability to do a lot of good within organized EA, compared to outside of it.
I intentionally dodged giving more details in these spots, because I want people to reason from the information and figure out what’s going on and what that means, and I don’t think updating ‘against’ (or for) things is the way one should be going about updating.
Also because Long Post Is Long and getting into those other things would be difficult to write well, make things much longer, and be a huge distraction from actually processing the information.
I think there’s a much better chance of people actually figuring things out this way.
That doesn’t mean you’re not asking good questions.
I’d give the following notes.
“Doing good better” implies a lot of framework already in ways worth thinking about.
The EA definition above has even more implicit framework, and my instinctive answer to whether I roughly endorse it would be Mu. My full answer is at least one post.
EA definitely has both shared moral frameworks that are like water to a fish, and also implied moral frameworks that fall out of actions and revealed preferences, many of which wouldn’t be endorsed consciously if made explicit. I disagree with much of both, but I want readers to be curious and ask what those are and figure that out, rather than taking my word for it. I’ll leave the details of where I disagree with them for another time, if and when I have the time and a method to explain properly.
As for disagreements with EA’s modes of operation, I believe I do my best to largely answer those through the full content of the post.
Apologies that I can’t more fully answer, at least for now.
OK, thanks; this sounds reasonable.

That said, I fear that people in my position—viz., students who don’t really know non-student EAs*—don’t have the information to “figure out what’s going on and what that means.” So I want to note here that it would be valuable for people like me if you or someone else someday wrote a post explaining more about what’s going on in organized EA (and I’ll finish reading this post carefully, since it seems relevant).
*I run my college’s EA group; even relative to other student groups I/we are relatively detached from organized EA.
Sidenote: my Zvi-model is consistent with Zvi being worried about organized EA both for reasons that would also worry me (e.g., “I have deep worries that important things are deeply, deeply wrong, especially epistemically, and that this results in an increasingly Goodharted and inherently political and insider-biased system”) and for reasons that would not worry me much (e.g., EA is quite demanding or quite utilitarian, or something related to “doing good better” or the definition of EA being bad). So I’m not well-positioned to infer much from the mere fact that Zvi (or someone else) has concerns. Of course, it’s much healthier to form beliefs on the basis of understanding rather than deference anyway, so it doesn’t really matter. I just wanted to note that I can’t infer much from your and others’ affects for this reason.
Nearly two years into the pandemic, the core EA organizations still seem to show no sign of caring that they didn’t prevent it, despite their missions including fighting biorisks. Doing so would require asking uncomfortable questions and accepting uncomfortable truths and there seems to be no willingness to do so.

The epistemic habits that would be required to engage with such an issue seem to be absent.

When it comes to Goodharting, one example would be Ben Hoffman’s criticism of GiveWell for measuring their success by the cost GiveWell imposes on other people. Instead of producing reports that are as informative as possible, that metric pushes the report writing in a direction that motivates people to donate instead of being demotivated by potential issues (Ben worked at GiveWell).
Nearly two years into the pandemic, the core EA organizations still seem to show no sign of caring that they didn’t prevent it, despite their missions including fighting biorisks.
Which core organizations are you referring to, and which signs are you looking for?
This has been discussed to some extent on the Forum, particularly in this thread, where multiple orgs were explicitly criticized. (I want to see a lot more discussions like these than actually exist, but I would say the same thing about many other topics — EA just isn’t very big and most people there, as anywhere, don’t like writing things in public. I expect that many similar discussions happened within expert circles and didn’t appear on the Forum.)
I worked at CEA until recently, and while our mission isn’t especially biorisk-centric (we affect EA bio work in indirect ways on multi-year timescales), our executive director insisted that we should include a mention in the opening talk of the EA Picnic that EA clearly fell short of where it should have been on COVID. It’s not much, but I think it reflects a broader consensus that we could have done better and didn’t.
That said, the implication that EA not preventing the pandemic is a problem for EA seems reasonable only in a very loose sense (better things were possible, as they always are). Open Phil invested less than $100 million into all of its biosecurity grants put together prior to February 2020, and that’s over a five-year period. That this funding (and direct work from a few dozen people, if that) failed to prevent COVID seems very unsurprising, and hard to learn from.
Is there a path you have in mind whereby Open Phil (or anyone else in EA) could have spent that kind of money in a way that would likely have prevented the pandemic, given the information that was available to the relevant parties in the years 2015-2019?
Doing so would require asking uncomfortable questions and accepting uncomfortable truths and there seems to be no willingness to do so.
I find this kind of comment really unhelpful, especially in the context of LessWrong being a site about explaining your reasoning and models.
What are the uncomfortable questions and truths you are talking about? If you don’t even explain what you mean, it seems impossible to verify your claim that no one was asking/accepting these “truths”, or even whether they were truths at all.
I have argued to some EA leaders that the pandemic called for a rapid and intense response as an opportunity to Do a Thing and thereby do a lot of good, and they had two general responses. One was the very reasonable ‘there’s a ton of uncertainty and the logistics of actually doing useful things are hard, yo.’ What I still don’t understand is the other: the arguments against a hypothetical use of funds that by assumption would work.

In particular (this was pre-Omicron), I presented this hypothetical, based on a claim from David Manheim; it doesn’t matter for this purpose whether the model of action would have worked or not, because we’re assuming it does:
In May 2020, let’s say you know for a fact that the vaccines are highly safe and effective, and on what schedule they will otherwise be available. You can write a $4 billion check to build vaccine manufacturing plants for mRNA vaccines. As a result, in December 2020, there will be enough vaccine for whoever wants one, throughout the world.
Do you write the check?
The answer I got back was not only an emphatic no, but that it was such a naive thing to think this would be a good idea, and that I needed to learn more about EA.
I will point out that my work proposing funding mechanisms to work on that, and the idea, was being funded by exactly those EA orgs which OpenPhil and others were funding. (But I’m not sure why the people you spoke with claimed that they wouldn’t fund this, and following your lead, I’ll ignore the various issues with the practicalities—we didn’t know mRNA was the right thing to bet on in May 2020, the total cost for enough manufacturing for the world to be vaccinated in <6 months is probably a (single digit) multiple of $4bn, etc.)
I haven’t done much research on this, but from a naive perspective, spending 4 billion dollars to move up vaccine access by a few months sounds incredibly unlikely to be a good idea? Is the idea that it is more effective than standard global health interventions in terms of QALYs or a similar metric, or that there’s some other benefit that is incommensurable with other global health interventions? (This feels like asking the wrong question but maybe it will at least help me understand your perspective)
The idea is that the extra production capacity funded with that $4b doesn’t just move up access a few months for rich countries, it also means poor countries get enough doses in months not years, and that there is capacity for making boosters, etc. (It’s a one-time purchase to increase the speed of vaccines for the medium term future. In other words, it changes the derivative, not the level or the delivery date.)
Is there currently a supply shortage of vaccines?

Yes, a huge one.

“COVAX, the global program for purchasing and distributing COVID-19 vaccines, has struggled to secure enough vaccine doses since its inception…
Nearly 100 low-income nations are relying on the program for vaccines. COVAX was initially aiming to deliver 2 billion doses by the end of 2021, enough to vaccinate only the most high-risk groups in developing countries. However, its delivery forecast was wound back in September to only 1.425 billion doses by the end of the year.
And by the end of November, less than 576 million doses had actually been delivered.”
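For scale, here is a quick sketch of the arithmetic implied by the figures quoted above; the three numbers are taken directly from that quote, and the percentages are just derived from them.

```python
# Rough arithmetic on the COVAX figures quoted above.
original_target = 2_000_000_000  # doses COVAX initially aimed to deliver by end of 2021
revised_target = 1_425_000_000   # forecast after the September revision
delivered = 576_000_000          # upper bound on doses actually delivered by end of November

print(f"Delivered vs. original target: {delivered / original_target:.0%}")  # ~29%
print(f"Delivered vs. revised target:  {delivered / revised_target:.0%}")   # ~40%
```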
Thanks for sharing your experience.

I’ve been writing the EA Newsletter and running the EA Forum for three years, and I’m currently a facilitator for the In-Depth EA Program, so I think I’ve learned enough about EA not to be too naïve.
I’m also an employee of Open Philanthropy starting January 3rd, though I don’t speak for them here.
Given your hypothetical and a few minutes of thought, I’d want Open Phil to write the check. It seems like an incredible buy given their stated funding standards for health interventions and reasonable assumptions about the “fewer manufacturing plants” counterfactual. (This makes me wonder whether Alexander Berger is among the leaders you mentioned, though I assume you can’t say.)
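To make the “incredible buy” intuition concrete, here is a minimal back-of-envelope sketch. The $4 billion figure comes from the hypothetical; every other number below is an illustrative assumption of mine, not an estimate from this thread or from Open Phil, and the point is only to show how the cost per death averted compares to a typical global-health funding bar as the assumed impact varies.

```python
# Illustrative back-of-envelope for the hypothetical $4B manufacturing check.
# All impact numbers are made-up assumptions used to show the shape of the calculation.

check_size = 4_000_000_000  # dollars, from the hypothetical

# Cost per death averted as a function of the (unknown) number of deaths that
# earlier worldwide vaccine access would have averted:
for deaths_averted in (100_000, 500_000, 2_000_000):
    cost_per_death = check_size / deaths_averted
    print(f"{deaths_averted:>9,} deaths averted -> ${cost_per_death:,.0f} per death averted")

# For comparison, a rough (assumed, not sourced) bar for top global-health
# charities is on the order of a few thousand dollars per death averted.
```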
Are any of the arguments that you heard against doing so available for others to read? And were the people you heard back from unanimous?
I ask not in the spirit of doubt, but in the spirit of “I’m surprised and trying to figure things out”.
(Also, David Manheim is a major researcher in the EA community, which makes the whole situation/debate feel especially strange. I’d guess that he has more influence on actual EA-funded COVID decisions than most of the people I’d classify as “EA leaders”.)
What are the uncomfortable questions and truths you are talking about?
COVID-19 is airborne. Biosafety level 2 is not sufficient to protect against airborne infections. The Chinese did gain-of-function research on coronaviruses under biosafety level 2 in Wuhan and publicly said so in their published papers. This is the most likely reason we have the pandemic. There are strong efforts to cover up the lab leak, from the Chinese, the US and other parties.
Is there a path you have in mind whereby Open Phil (or anyone else in EA) could have spent that kind of money in a way that would likely have prevented the pandemic, given the information that was available to the relevant parties in the years 2015-2019?
Fund a project that lists who does what gain-of-function research with what safety precautions, to understand the threat better. After discovering that the Chinese did their gain-of-function research at biosafety level 2, put public pressure on them to not do that.

After putting pressure on shutting down all biosafety level 2 gain-of-function research, attempt to do the same with biosafety level 3 gain-of-function research. Without the power to push through a global ban on the research, pushing for doing it only at biosafety level 4 might be a fight worth having.

It’s probably still worth funding such a project.
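Purely to illustrate what the proposed listing project might track, here is a minimal sketch of one possible record format; all field names and the example entry are assumptions of mine, not details from the thread.

```python
from dataclasses import dataclass

@dataclass
class GainOfFunctionRecord:
    """One entry in a hypothetical registry of gain-of-function work."""
    lab: str               # institution doing the work
    country: str
    pathogen_family: str   # e.g. "coronavirus", "influenza"
    biosafety_level: int   # BSL the work is reported to run under
    funder: str            # who pays for the work
    source: str            # paper or grant record documenting the claim

def below_bsl(records: list, minimum: int = 4) -> list:
    """Return work reportedly done below the given biosafety level."""
    return [r for r in records if r.biosafety_level < minimum]

# Hypothetical example entry, for illustration only:
example = GainOfFunctionRecord(
    lab="(example lab)",
    country="(example country)",
    pathogen_family="coronavirus",
    biosafety_level=2,
    funder="(example funder)",
    source="(published methods section)",
)
print(below_bsl([example]))
```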
If you had done even a bit of homework, you’d see that there was money going into all of this. iGem and the Blue ribbon panel have been getting funded for over half a decade, and CHS for not much less. The problem was that there were too few people working on the problem, and there was no public will to ban scientific research which was risky. And starting from 2017, when I was doing work on exactly these issues—lab safety and precautions, and trying to make the case for why lack of monitoring was a problem—the limitation wasn’t a lack of funding from EA orgs. Quite the contrary—almost no-one important in biosecurity wasn’t getting funded well to do everything that seemed potentially valuable.
So it’s pretty damn frustrating to hear someone say that someone should have been working on this, or funding this. Because we were, and they were.
If you had done your research, you would know that I opened previous threads and have done plenty of research.

I haven’t claimed that there wasn’t any money being invested into “working on biosecurity,” but that most of it wasn’t effectively invested to stop the pandemic. The people funding the gain-of-function research also see themselves as working in biosafety.
The problem was that there were too few people working on the problem, and there was no public will to ban scientific research which was risky.
The position at the time shouldn’t have been to target banning gain-of-function research in general, given that’s politically not achievable, but to say that it should only happen under biosafety level 4.

It would have been possible to run a press campaign about how the Trump administration wanted to allow dangerous gain-of-function research, previously banned, to happen under conditions that aren’t even the highest available biosafety level.
It’s probably still true today that “no gain-of-function outside of biosafety level 4” is the correct political demand.
The Chinese wrote openly in their papers that they were doing the work under biosafety level 2. The problem was not a lack of monitoring of their labs. It was just that nobody cared that they were openly doing research in a dangerous setting.
iGem and the Blue ribbon panel have been getting funded for over half a decade, and CHS for not much less.
iGem seems to be a project about getting people to do more dangerous research, not a project about reducing the amount of dangerous research that happens. Such an organization has bad incentives to take on the virology community to stop them from doing harm.

CHS seems to be doing net beneficial work. I’m still a bit confused about why they ran the coronavirus pandemic exercise after the chaos started at the WIV. That’s sort of between “someone was very clever” and “someone should have reacted much better”.
I can go through details, and you’re wrong about what the mentioned orgs have done that matters, but even ignoring that, I strongly disagree about how we can and should push for better policy. I don’t think that even with unlimited funding (which we effectively had) there could have been enough people working on this to have done what you suggest (and we still don’t have enough people for high priority projects, despite, again, an effectively blank check!). And I think you’re suggesting that we should have prioritized a single task, stopping Chinese BSL-2 work, based purely on post-hoc information, instead of pursuing the highest-EV work as it was, IMO correctly, assessed at the time.
But even granting prophecy, I think that there is no world in which even an extra billion dollars per year 2015-2020 would have been able to pay for enough people and resources to get your suggested change done. And if we had tried to push on the idea, it would have destroyed EA Bio’s ability to do things now. And more critically, given any limited level of public attention and policy influence, focusing on mitigating existential risks instead of relatively minor events like COVID would probably have been the right move even knowing that COVID was coming! (Though it would certainly have changed the strategy so we could have responded better.)
iGem seems to be a project about getting people to do more dangerous research, not a project about reducing the amount of dangerous research that happens. Such an organization has bad incentives to take on the virology community to stop them from doing harm.
Did you look at what Open Philanthropy is actually funding? https://igem.org/Safety

Or would you prefer that safety people not try to influence the education and safety standards of the people actually doing the work? Because if you ignore everyone with bad incentives, you can’t actually change the behaviors of the worst actors.
I don’t think that funding this work is net negative. On the other hand, I don’t think it could do what’s necessary to prevent the coronavirus lab leak in 2019 or either of the two potential coronavirus lab leaks in 2021.

It took the White House Office of Science and Technology Policy to create the first moratorium because the NIH wasn’t capable of it, and it would also take outside pressure to achieve anything else strong enough to be sufficient to deal with the problem.
You didn’t respond to my comment that addressed this, but: “even granting prophecy, I think that there is no world in which even an extra billion dollars per year 2015-2020 would have been able to pay for enough people and resources to get your suggested change done. And if we had tried to push on the idea, it would have destroyed EA Bio’s ability to do things now. And more critically, given any limited level of public attention and policy influence, focusing on mitigating existential risks instead of relatively minor events like COVID would probably have been the right move even knowing that COVID was coming!”
Thanks for sharing a specific answer! I appreciate the detail and willingness to engage.
I don’t have the requisite biopolitical knowledge to weigh in on whether the approach you mentioned seems promising, but it does qualify as something someone could have been doing pre-COVID, and a plausible intervention at that.
My default assumptions for cases of “no one in EA has funded X”, in order from most to least likely:
1. No one ever asked funders in EA to fund X.
2. Funders in EA considered funding X, but it seemed like a poor choice from a (hits-based or cost-effectiveness) perspective.
3. Funders in EA considered funding X, but couldn’t find anyone who seemed like a good fit for it.
4. Various other factors, including “X seemed like a great thing to fund, but would have required acknowledging something the funders thought was both true and uncomfortable”.
In the case of this specific plausible thing, I’d guess it was (2) or (3) rather than (1). While anything involving China can be sensitive, Open Phil and other funders have spent plenty of money on work that involves Chinese policy. (CSET got $100 million from Open Phil, and runs a system tracking PRC “talent initiatives” that specifically refers to China’s “military goals” — their newsletter talks about Chinese AI progress all the time, with the clear implication that it’s a potential global threat.)
That’s not to say that I think (4) is impossible — it just doesn’t get much weight from me compared to those other options.
FWIW, as far as I’ve seen, the EA community has been unanimous in support of the argument “it’s totally fine to debate whether this was a lab leak”. (This is different from the argument “this was definitely a lab leak”.) Maybe I’m forgetting something from the early days when that point was more controversial, or I just didn’t see some big discussion somewhere. But when I think about “big names in EA pontificating on leaks”, things like this and this come to mind.
*****
Do you know of anyone who was trying to build out the gain-of-function project you mentioned during the time before the pandemic? And whether they ever approached anyone in EA about funding? Or whether any organizations actually considered this internally?
See my reply above, but this was actually none of your 4 options—it was “funders in EA were pouring money into this as quickly as they could find people willing to work on it.”
And the reasons no-one was pushing the specific proposal of “publicly shame China into stopping [so-called] GoF work” include the fact that US labs have done and still do similar work in only slightly safer conditions, as do microbiologists everywhere else, and that building public consensus about something no-one but a few specific groups of experts care about isn’t an effective use of funds.
Thanks for the further detail. It sounds like this wasn’t actually a case of “no one in EA has funded X”, which makes my list irrelevant.
(Maybe the first item on the list should be “actually, people in EA are definitely funding X”, since that’s something I often find when I look into claims like Christian’s, though it wasn’t obvious to me in this case.)