I think a bounty for actually ending malaria is great. I think a bounty for unilaterally releasing gene drives is probably quite bad for the world.
Like, I think malaria is really bad, and worth making quite aggressive sacrifices to end, but at the end of the day there are even bigger games in town, and setting the precedent of “people are rewarded for unilaterally doing crazy biotech shenanigans” has a non-negligible chance of increasing global catastrophic risk, and potentially even existential risk.
Like, I think actions can just have really large unintended consequences, and this is the kind of domain where I do actually like quite a bit of status quo bias and conservatism. I frequently talk to capabilities researchers who are like “I don’t care about your purported risks from AI, making models better can help millions of people right now, and I don’t want to be bogged down by your lame ethical concerns”, and I think this reasoning sure is really bad for the world and will likely have catastrophic consequences for all of humanity. I think this post’s treatment of the gene drives issue gives me some pretty similar vibes, and this is a reference class check I really don’t want to get wrong.
Alright. You’re probably right. And I don’t want to increase existential risk. But I do want the person or group that ends malaria to get >=$5,000, which is what this bounty is actually written to do, gene drive or no. Is there a way we can actually do that, or should I scrap it entirely?
Can we add an amendment that requires significant consultation with stakeholders? Should it be mandatory that the work be done as part of a large nonprofit, so that individuals aren’t encouraged to act unilaterally in the future? Should it be amended to also spread the money among people doing reasonable, informed research on safety concerns?
What would responsible buy-in actually look like for technology like this? I hope it’s not just “get buy-in from local elected political leaders”, as I don’t expect that to correlate well with risk at all.
I am entirely with you here; I don’t like unilateral action either, and I’m interested in introducing whatever safety protocols you propose. But if nobody can come up with an N-step process to turn unilateral action into multilateral consensus, then the anxiety isn’t really about this particular action being unilateral; it’s about the action having unintended negative consequences at all.
In this specific case, I think the problem would basically be solved if you would just say “I am not going to award this bounty if you unilaterally release a gene drive without also writing a post that convinces me that the effects of doing so in a rushed way were worth the costs”. Basically just inverting the burden of proof in that one case.
I agree about inverting the burden of proof in that case. I’d prefer to operationalize “unilaterally” more. Here’s an alternative:
“I am not going to award this bounty if other people with a strong understanding of the science involved point out straightforward flaws that make the project appear catastrophically net negative, or if a post on LessWrong about the project leads users to point out straightforward reasons why the project is catastrophically net negative, or if I think you made no attempt to get such people to actually check and then change their minds (or engage with their arguments for 100+ hours) before going ahead.”
Done then.
I’ll think about this more and formalize it a bit before I actually create the nonprofit’s pledge.