Biorisk is an Unhelpful Analogy for AI Risk

There are two main areas of catastrophic or existential risk which have recently received significant attention: biorisk, from natural sources, biological accidents, and biological weapons; and artificial intelligence, from detrimental societal impacts of deployed systems, incautious or intentional misuse of highly capable systems, and direct risks from agentic AGI/ASI. These have been compared extensively in research, and the comparison has even directly inspired policies. Comparisons are often useful, but in this case, I think the disanalogies are far more compelling than the analogies. Below, I lay these out piecewise, attempting to keep each pair of paragraphs, describing first biorisk and then AI risk, parallel to each other.

While I think the disanalogies are compelling, comparison can still be useful as an analytic tool, so long as we keep in mind that the ability to directly transfer lessons from biorisk to AI is limited by the many disanalogies laid out below. (Note that this post does not discuss the interaction of the two risks, which is a critical but separate topic.)

Comparing the Risk: Attack Surface

Pathogens, whether natural or artificial, have a fairly well-defined attack surface: their hosts’ bodies. Human bodies are largely static targets, are the subject of massive research effort, and have undergone eons of adaptation that make them more or less defensible; our ability to fight pathogens is increasingly well understood.

Risks from artificial intelligence, on the other hand, have a near-unlimited attack surface against humanity, including not only our deeply insecure but increasingly vital computer systems, but also our bodies; our social, justice, political, and governance systems; and our highly complex and interconnected but poorly understood infrastructure and economic systems. Few of these are understood to be robust, the classes of possible failure are manifold, and none of these targets were adapted or constructed for resilience to attack.

Comparing the Risk: Mitigation

Avenues to mitigate the impacts of pandemics are well explored, and many partially effective systems are in place. Global health, in various ways, is funded on the order of ten trillion dollars yearly, much of which has at times been directly refocused on fighting infectious disease pandemics. Accident risk with pathogens is a major area of focus, and while existing measures are manifestly insufficient to stop all accidents, decades of effort have greatly reduced the rate of accidents in laboratories working with both clinical and research pathogens. Biological weapons are banned internationally, and breaches of the treaty are both well understood to be unacceptable norm violations and limited to a few small and unsuccessful attempts in recent decades.

The risks and mitigation paths for AI, both for societal impacts and for misuse, are poorly understood and almost entirely theoretical. Recent efforts like the EU AI Act have unclear impact. The ecosystem for managing these risks is growing quickly, but at present it likely includes no more than a few thousand people, with optimistically a few tens of millions of dollars of annual funding, and it has no standards or clarity about how to respond to different challenges. Accidental negative impacts of current systems, both those poorly vetted or untested and those developed with safety in mind, are more common than not, and the scale of the risk is almost certainly growing far faster than the response efforts. There are no international laws banning the risky development or intentional misuse of dangerous AI systems, much less norms of caution or against abuse.

Comparing the Risk: Standards

A wide variety of mandatory standards exist for disease reporting, data collection, tracking, and response. The bodies which receive these reports, at both the national and international levels, are well known. There are also clear standards for working safely with pathogens, which are largely effective when followed properly, along with requirements, albeit weak and often ignored ones, to follow those standards not only in cases where known dangerous agents are used but even in cases where the danger is speculative. While all of this could be more robust, improvements are on policymakers’ agendas, and in general researchers support following risk-mitigation protocols because doing so is aligned with their personal safety.

In AI, it is unclear what should be reported, what data should be collected about incidents, and whether firms or users need to report even admittedly worrying incidents. There is no body in place to receive or handle such reports. There are no standards for developing novel, risky AI systems, and the safeguards that do exist are admitted to be insufficient for the types of systems developers say they are actively trying to create. No requirement to follow even these safeguards exists, and prevailing norms cut against doing so. Policymakers are conflicted about whether to put any safeguards in place, and many researchers actively oppose attempts to do so, dismissing the claimed dangers as absurd or merely theoretical.

Conclusion

Attempts to build safety systems are critical, and different domains require different types of systems, different degrees of caution, and different conceptual models appropriate to the risks being mitigated. At the same time, the disanalogies listed here are not in and of themselves reasons that similar strategies cannot sometimes be useful, once the limitations are understood. For that reason, these disanalogies should serve as a reminder and a caution against analogizing, not as a standalone reason to reject parallel approaches across the two domains.

Crossposted from the EA Forum.