A few brief notes.
1. OpenPhil's biosecurity work already focuses on AIxBio quite a bit.
2. Re: engineered pandemics, I think the analysis is wrong on several fronts. Primarily, it seems unclear how an AI-engineered pandemic would differ from the types of pandemic we're already defending against, so assuming no loss of control (LoC), typical biosecurity approaches seem fine.
3. Everyone is already clear that we want AI kept away from nuclear systems, and I don't know that there is much more to say about it, other than stressing how little we understand deep learning systems and making sure politicians and others are aware.
4. This analysis ignores AIxCybersecurity issues, which also seem pretty important for non-LoC risks.
Hey David, thanks for the feedback!
1. I did look at OpenPhil's grant database back in May and found nothing. Could you point us to where we can find more information about this?
2. Hmm, are you saying an AI-engineered pandemic would likely be similar to the natural pandemics we're already defending against? If so, I would probably disagree. I'm also unsure exactly how AI-engineered pandemics might create highly novel strains, but the risk of this happening seems non-trivial.
And wouldn't AI accelerate such developments as well?
3. Thanks for the info! I don't think we have access to many researchers' personal views beyond what we've seen in the literature.
4. Ah, did you mean AI x Cybersecurity x non-LoC risks? That sounds right. I don't think we've actually thought about this.
Reflecting on this, it feels like we could have done better if we had spoken to at least a few researchers instead of depending so much on lit reviews.
1. The grants are to groups and individuals doing work in this area, so this encompasses many of the grants on biorisk in general, as well as several people employed by OpenPhil doing direct work on the topic.
2. I'm saying that defense looks similar: early detection, quarantines, contact tracing, and rapid vaccine production all help, and it's unclear how non-superhuman AI could do any of the scariest things people have discussed as threats.
3. Public statements to the press already discuss this, as does some research.
4. Yes, and again, I think it's a big deal.
And yes, but expert elicitation is harder than lit reviews.