I think a bounty for actually ending malaria is great. I think a bounty for unilaterally releasing gene drives is probably quite bad for the world.
Like, I think malaria is really bad, and worth making quite aggressive sacrifices to end, but at the end of the day there are even bigger games in town, and setting the precedent of “people are rewarded for unilaterally doing crazy biotech shenanigans” has a non-negligible chance of increasing global catastrophic risk, and potentially even existential risk.
I think the pathways towards doing that are twofold:
We further erode the currently very fragile and unclear norms around not unilaterally releasing pathogen-adjacent stuff. While I think a malaria gene drive specifically is unlikely to have catastrophic consequences here, the same logic feels like it gets you much closer to stuff that sure does end up dangerous, or towards technologies that enable much more dangerous things. (In general I am pretty wary of naive consequentialist reasoning of this type. A negative utilitarian could go through the same reasoning to conclude why it’s a good idea to release an omnicidal pathogen.)
We broadly destabilize the world by having more people make naive consequentialist estimates of stuff, and then take large world-reshaping actions to achieve them, without consulting very much with the people who are most likely to have properly thought through the consequences of those actions. In this case, my sense is that actually rushing to release gene drives is not a great idea and might very well prevent future gene drives. This is a classic unilateralist situation, and clearly it’s much worse if we somehow fuck up our gene drives forever than if we have to have a few more years of malaria. I think there is a time when it makes sense to go “wow, the current delay here is really unacceptable and someone should just go ahead and do this”, but I don’t think that time is right now, and this bounty feels like it pushes in the “release faster” direction.
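The unilateralist dynamic in that second pathway can be made concrete with a toy Monte Carlo sketch. All numbers here are illustrative assumptions, not estimates for the gene-drive case: even when an action is net-negative, the chance that at least one of many independent estimators judges it positive, and therefore acts, grows quickly with the number of potential actors.

```python
import random

random.seed(0)

def p_someone_acts(true_value, n_actors, noise_sd, trials=100_000):
    """Each actor independently estimates the action's value (true value
    plus Gaussian noise) and acts iff their own estimate is positive.
    Returns the fraction of trials in which at least one actor acts."""
    acted = 0
    for _ in range(trials):
        estimates = (random.gauss(true_value, noise_sd) for _ in range(n_actors))
        if max(estimates) > 0:  # a single optimistic estimator suffices
            acted += 1
    return acted / trials

# A mildly net-negative action (true value -1, estimate noise sd 1):
for n in (1, 5, 20):
    print(f"{n:>2} potential actors -> acted in {p_someone_acts(-1, n, 1):.0%} of trials")
```

With one actor the action happens only when that actor’s estimate errs positive (roughly 16% of the time under these assumptions); with twenty independent actors it happens almost every time, which is why “someone should just go ahead and do this” reasoning degrades as the pool of possible unilateralists grows.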
Like, I think actions can just have really large unintended consequences, and this is the kind of domain where I do actually like quite a bit of status quo bias and conservatism. I frequently talk to capabilities researchers who are like “I don’t care about your purported risks from AI, making models better can help millions of people right now, and I don’t want to be bogged down by your lame ethical concerns”, and I think this reasoning sure is really bad for the world and will likely have catastrophic consequences for all of humanity. I think this post’s treatment of the gene drives issue gives me some pretty similar vibes, and this is a reference class check I really don’t want to get wrong.
At the object level I think actors like Target Malaria, the Bill and Melinda Gates Foundation, Open Philanthropy, and Kevin Esvelt are right to support a legal process approved by affected populations and states, and that such a unilateral illegal release would be very bad in terms of expected lives saved with biotech. Some of the considerations:
Eradication of malaria will require a lot more than a gene drive against Anopheles gambiae s.l., meaning government cooperation is still required.
Resistance can and does develop to gene drives, so that development of better drives and coordinated support (massive releases, other simultaneous countermeasures, extremely broad coverage) are necessary to wipe out malaria in regions. This research will be set back or blocked by a release attempt.
This could wreck the prospects for making additional gene drives for other malaria-carrying mosquitoes, schistosomiasis-causing worms, the tsetse flies that transmit trypanosomiasis, and other disease vectors, as well as agricultural applications. Collectively such setbacks could cost millions more lives than the lives lost to the delay now.
There could be large spillover to other even more beneficial controversial biotechnologies outside of gene drives. The thalidomide scandal involved 10,000 pregnancies with death or deformity of the babies. But it led to the institution of more restrictive FDA (and analogs around the world imitating the FDA) regulation, which has by now cost many millions of lives, e.g. in slowing the creation of pharmaceuticals to prevent AIDS and Covid-19. A single death set back gene therapy for decades. On the order of 70 million people die a year, and future controversial technologies like CRISPR therapies may reduce that by a lot more than malaria eradication.
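The last two considerations are, at bottom, an expected-value comparison. A back-of-envelope sketch, with every input an illustrative assumption rather than a sourced estimate, shows the shape of the argument:

```python
# All inputs are illustrative assumptions, not sourced estimates.
malaria_deaths_per_year = 600_000  # rough current order of magnitude
delay_years = 5                    # extra years of malaria under the slow, legal route

cost_of_waiting = malaria_deaths_per_year * delay_years

# Suppose a botched unilateral release triggers, with some probability, a
# backlash that blocks gene drives and adjacent biotech for decades, and that
# the blocked technologies would otherwise avert deaths from malaria elsewhere,
# schistosomiasis, trypanosomiasis, etc.:
p_backlash = 0.2
blocked_years = 30
averted_deaths_per_year = 800_000

expected_cost_of_backlash = p_backlash * blocked_years * averted_deaths_per_year

print(f"cost of waiting:           {cost_of_waiting:>12,} lives")
print(f"expected cost of backlash: {expected_cost_of_backlash:>12,.0f} lives")
```

Under these particular assumptions the expected backlash cost (4.8M lives) exceeds the cost of waiting (3M lives), but the comparison flips readily as the inputs move, which is exactly why the disagreement below centers on how plausible the backlash mechanism actually is.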
I strongly oppose a prize that would pay out for illegal releases of gene drives without local consent from the affected regions, and any prizes for ending malaria should not incentivize that. Knowingly paying people to commit illegal actions is also generally illegal!
Putting aside the concerns about potential backfire effects of unilateral action[1], calling the release of gene drive mosquitoes “illegal” is unsubstantiated. What that claim actually cashes out to is “every single country where Anopheles gambiae are a substantial vector for the spread of malaria has laws that narrowly prohibit the release of mosquitoes”. The alternative interpretation, that “every single country will stretch obviously unrelated laws as far as necessary to throw the book at you if you do this”, may be true, but isn’t very interesting, since that can be used as a fully general argument against doing anything ever.[2]
[1] Which I’m inclined to agree with, though notably I haven’t actually seen a cost/benefit analysis from any of those sources.
[2] Though you’re more likely to have the book thrown at you for some things than for others, and it’d be silly to deny that we have non-zero information about what those things are in advance. I still think the distinction is substantial.
Unilateral action in general might be bad, but most of these reasons you’ve given to not support an illegal one (if gene drives were explicitly illegal, which they’re not) seem completely misguided or misleading. I can’t parse whether or not this is deliberate. I’m against lying as a means of stopping unilateral action in most real world scenarios; people who want to obtain or give multilateral consensus will need to understand where actual risks come from, not made up risks designed to discourage bad actors.
Eradication of malaria will require a lot more than a gene drive against Anopheles gambiae s.l., meaning government cooperation is still required.
AFAICT this is basically incorrect. What government cooperation do you suspect is necessary? You have to release a “significant amount” of mosquitoes across a “large enough” part of Africa, yes, but unless the local population is unusually isolated, the mosquitoes travel to other countries. Metacelsus estimates you could do it with maybe ~50k on a shoestring budget and most of the work would be catching the mosquitoes.
If what you really mean is “you have to not be actively hunted by every local government that has jurisdiction over a suitable place to start the gene drive” then A. that level of cooperation currently exists, and B. I think you are both underestimating the number of governments in Sub-Saharan Africa and overestimating their state capacity.
Resistance can and does develop to gene drives, so that development of better drives and coordinated support (massive releases, other simultaneous countermeasures, extremely broad coverage) are necessary to wipe out malaria in regions. This research will be set back or blocked by a release attempt.
Resistance may develop to the doublesex gene, and it may not. So far none of the trials done by Target Malaria have suggested this happens in practice. It is of course still theoretically possible, for that gene, that a resistance could develop at larger population sizes, which is one of the only good reasons why you shouldn’t unilaterally release gene drive mosquitoes. But there are other gene drives you could deploy that would simply engineer mosquitoes not to transmit the parasite for malaria, so this is not a concern about gene drives in general, just the genes that introduce sterility.
Resistance aside, the claim that “research will be set back or blocked by a release attempt” is also obviously incorrect. If you somehow only exterminate a completely isolated subsection of the larger Anopheles gambiae population (which, again, I find pretty unlikely) you can just try again. The Senegalese government is not going to both detect and then ban gene drives in response to a deployment that failed to reach escape velocity.
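The claim that a small release can sweep, and that conversion-resistant alleles are the main failure mode, follows from the super-Mendelian transmission of homing drives. Here is a minimal deterministic allele-frequency sketch; this is a toy model, not a model of real Anopheles gambiae population dynamics, and the 95% conversion efficiency is an assumed figure:

```python
def drive_frequency(q0, conversion=0.95, generations=20):
    """Deterministic sweep of a homing gene drive in a randomly mating
    population. In drive/wild heterozygotes the drive converts the wild
    allele with probability `conversion`, so a heterozygote transmits
    the drive with probability (1 + conversion) / 2 instead of 1/2."""
    q, history = q0, [q0]
    for _ in range(generations):
        # Hardy-Weinberg genotype frequencies, super-Mendelian transmission:
        # q' = q^2 * 1 + 2q(1-q) * (1 + conversion) / 2
        q = q * q + 2 * q * (1 - q) * (1 + conversion) / 2
        history.append(q)
    return history

# A release amounting to 0.1% of the population sweeps to near-fixation
# within ~20 generations:
history = drive_frequency(0.001)
print([round(q, 3) for q in history[::5]])
```

An allele the drive cannot convert reverts to ordinary Mendelian ½ transmission in the same recursion, so once such an allele appears the sweep stalls; that is the resistance concern, and it attaches to a particular target site and payload, not to gene drives as a category.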
This could wreck the prospects for making additional gene drives for other malaria-carrying mosquitoes, schistosomiasis-causing worms, the tsetse flies that transmit trypanosomiasis, and other disease vectors, as well as agricultural applications. Collectively such setbacks could cost millions more lives than the lives lost to the delay now.
Read one way, this is a bizarre technical misconception: the doublesex gene drive does not work on worms. A gene drive in Anopheles gambiae would involve epsilon risk of introducing resistance in schistosomiasis-causing worms.
Read another way:
There could be large spillover to other even more beneficial controversial biotechnologies outside of gene drives. The thalidomide scandal involved 10,000 pregnancies with death or deformity of the babies. But it led to the institution of more restrictive FDA (and analogs around the world imitating the FDA) regulation, which has by now cost many millions of lives, e.g. in slowing the creation of pharmaceuticals to prevent AIDS and Covid-19. A single death set back gene therapy for decades. On the order of 70 million people die a year, and future controversial technologies like CRISPR therapies may reduce that by a lot more than malaria eradication.
The FDA does not, in fact, have jurisdiction over any of the places affected by the parasites you mentioned. The U.S. government has the money, political capital, and state capacity to start institutions like the FDA, Kenya generally does not. And the U.S. government generally starts deadly regulatory crazes like that in response to things happening to U.S. citizens, not Africans.
So, altogether, what you are proposing will/may happen if someone releases a gene drive to eradicate mosquitoes without getting “government permission” first is:
It doesn’t work.
Even though it didn’t work to kill mosquitoes, it, through a completely unspecified and at this point imaginary mechanical process, kills a small number of Kenyans, makes some women bear deformed children, etc.
This negative side effect of the gene drive is somehow detected by either e.g. the Kenyan government or NGOs from first world countries.
Those first world nations unanimously care enough about the death of ~1/10/100 Kenyans from a bad gene drive in Kenya to block CRISPR technology in agriculture, or maybe other third world nations unanimously care enough to proactively and effectively block further attempts to destroy other parasites.
In other words: Despite the fact that the biotechnology efforts to kill those parasites you mentioned do not currently exist, we’re supposed to wait until 2030 to deploy the gene drive that stops malaria, because if they did exist they might be stopped in response to a side effect that does not sound mechanically possible and does not sound like it would be detected if it were possible. Pretty ridiculous IMO, even granting the (incorrect, but common on LessWrong) model of regulation that says the existence of FDA analogs is a result of every government “copying” the FDA, rather than normies just being extremely conservative when it comes to medicine as applied to humans.
This is frankly what jimrandomh talks about when he says the safety concerns about “gene drives” are analogous to safety concerns about “nuclear power”, in that they can’t be addressed by science or policy predictions because they’re actually anxieties provoked by the words “gene”, “biotechnology”, “unilateral”, etc. Would you like to make some bets regarding this story? I’ll give you great odds.
And this is all in addition to RobertM’s obvious critique that releasing gene drives is not, in fact, actually illegal. I’m keeping Habryka’s unilateral action clause, and was always going to include some clause that says we’re not going to pay out the bounty if it would mean pledgers get prosecuted, solely because more people are likely to be willing to pledge that way. These concerns do not seem to be coherent enough to merit amending the bounty otherwise.
without consulting very much with the people who have tried thinking more deeply through the consequences
I’m not sure what the standard is here, but this doesn’t feel quite right. Sometimes people who have tried thinking a lot about an area are worth ignoring (e.g. the field of medical ethics).
Agree that this shouldn’t be measured in “amount of time thinking through something” or in “degree to which they look like they are supposed to have thought through something”. I think it should just be measured in the likelihood that a group of people has actually figured out the most important considerations.
I edited it a bit. Now says “without consulting very much with the people who are most likely to have properly thought through the consequences of those actions”.
Yeah, if you can find a group of people who have thought deeply about the consequences that seems great. I also think for many great actions, there may not be any such body of people. In such a case (my first thought is that) I would try to create one? Like, find some very intelligent people with enough understanding of the details to engage on the object level, and who disagree, and put in the effort to debate them, and then actually change their minds. Might be a bit more robust.
(But of course, in our civilization, there are no real adults who can make sure a decision is the right one for the whole course of the world, there are only gradations of reduction of uncertainty, and at some point you have to make decisions according to your most true-and-tried principles.)
I think a bounty for actually ending malaria is great. I think a bounty for unilaterally releasing gene drives is probably quite bad for the world.
Like, I think malaria is really bad, and worth making quite aggressive sacrifices to end, but at the end of the day there are even bigger games in town, and setting the precedent of “people are rewarded for unilaterally doing crazy biotech shenanigans” has a non-negligible chance of increasing global catastrophic risk, and potentially even existential risk.
Like, I think actions can just have really large unintended consequences, and this is the kind of domain where I do actually like quite a bit of status quo bias and conservatism. I frequently talk to capabilities researchers who are like “I don’t care about your purported risks from AI, making models better can help millions of people right now, and I don’t want to be bogged down by your lame ethical concerns”, and I think this reasoning sure is really bad for the world and will likely have catastrophic consequences for all of humanity. I think this post’s treatment of the gene drives issue gives me some pretty similar vibes, and this is a reference class check I really don’t want to get wrong.
Alright. You’re probably right. And I don’t want to increase existential risk. But I do want the person or group that ends malaria to get >=$5,000, which is what this bounty is actually written to do, gene drive or no. Is there a way we can actually do that, or should I scrap it entirely?
Can we add an amendment that requires significant consultation with stakeholders? Should it be mandatory that it be done as part of a large nonprofit so that future individual people aren’t encouraged to act unilaterally anyways? Should it be amended to spread the money also among the people doing reasonable, informed research on safety concerns?
What would responsible buy-in actually look like, for technology like this? I hope not just “get buy in from local elected political leaders”, as I don’t expect that to actually correlate well with risk at all.
I am literally yours here; I don’t like unilateral action either, and am interested in introducing whatever safety protocols you propose. But if nobody can come up with an N-step process to turn unilateral action into multilateral consensus, the anxiety is not about this particular action being unilateral, it’s about the action having unintended negative consequences at all.
In this specific case, I feel like the problem would basically be solved if you would just say “I am not going to award this bounty if you unilaterally release a gene drive, without also writing a post that convinces me that the effects of doing so in a rushed way were worth the costs”. Basically just inverting the burden of proof in that one case.
I agree about inverting the burden of proof in that case. I’d prefer to operationalize “unilaterally” more. Here’s an alternative:
“I am not going to award this bounty if other people with a strong understanding of the science involved point out straightforward flaws that make the project appear catastrophically net negative, or if a post on LessWrong about the project leads users to point out straightforward reasons why the project is catastrophically net negative, or if I think you made no attempt to get such people to actually check and then change their minds (or engage with their arguments for 100+ hours) before going ahead.”
Done then.
I’ll think about this more and formalize it a bit before I actually create the nonprofit’s pledge.