Followup point on the Gain-of-Function ban as a practice run for AI:
My sense is that the biorisk people who were thinking about a Gain-of-Function ban were not primarily modeling it as a practice run for regulating AGI. This may have led them to deprioritize it.
I think biorisk is significantly lower than AGI risk, so if it's tractable and useful to regulate Gain-of-Function research as a practice run for regulating AGI, it's plausible this is actually much more important than business-as-usual biorisk reduction.
BUT smart people I know seem to disagree about how any of this works, so the "if tractable and useful" conditional is pretty non-obvious to me.
If the bio and AI people haven't had a serious conversation where they mapped out these considerations in more detail, I do think that should happen.