Hey David, thanks for the feedback!
1. I did look at OpenPhil’s grant database back in May and found nothing. Could you point us to where we could find more information about this?
2. Hmm, are you saying an AI-engineered pandemic would likely be similar to the natural pandemics we’re already defending against? If so, I would probably disagree. I’m also unsure how AI might engineer highly novel strains, but the risk of this happening seems non-trivial.
And wouldn’t AI accelerate such developments as well?
3. Thanks for the info! I don’t think we have access to many researchers’ personal views beyond what we’ve seen in the literature.
4. Ah, did you mean AI x Cybersecurity x Non-LOC risks? That sounds right. I don’t think we’ve actually thought about this.
Reflecting on this, it feels like we could have done better if we had spoken to at least a few researchers instead of relying so heavily on lit reviews.
The grants are to groups and individuals doing work in this area, so this encompasses many of the grants on biorisk in general, as well as several people employed by OpenPhil doing direct work on the topic.
I’m saying that defense looks similar: early detection, quarantines, contact tracing, and rapid vaccine production all help, and it’s unclear how non-superhuman AI could do any of the scariest things people have discussed as threats.
Public statements to the press already discuss this, as does some research.
Yes, and again, I think it’s a big deal.
And yes, but expert elicitation is harder than lit reviews.