Has your experience with this project given you any insights into bioterrorism risk?
Suppose that, rather than synthesizing a vaccine, you’d wanted to synthesize a new pandemic. Would that have been remotely possible? Do you think the current safeguards will be enough to prevent that sort of thing as the technology develops over the next decade or so?
Not really; I was concerned about biological X-risks before and I continue to be.
I don’t currently see any plausible defense against them. Even if we somehow got a sufficient number of nations to stop or moderate gain-of-function research and to think twice about what information to publish, genetic engineering will continue to become easier and cheaper over time. As a result, I can see us temporarily offsetting the decline in the minimum IQ*money*tech_level needed to destroy humanity, but not stopping it, and that’s already in a geopolitically optimistic scenario.
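(To make the "offset but not stop" intuition concrete, here is a minimal toy sketch of my own, not the commenter's model: assume the capability threshold for a catastrophic attack falls by a fixed fraction each year as technology spreads, while safeguards add a flat defensive offset. All numbers are illustrative assumptions.)

```python
# Toy model (illustrative assumptions only): technology lowers the minimum
# attacker capability needed each year; safeguards add a flat offset to that
# threshold. The offset delays the year the threshold is crossed, but because
# the base threshold keeps shrinking, it never prevents the crossing.

def year_threshold_crossed(initial_threshold=1000.0,
                           decay_per_year=0.85,      # assumed: attacks get ~15% easier per year
                           defensive_offset=200.0,   # assumed: flat boost from safeguards/treaties
                           attacker_capability=300.0):
    threshold = initial_threshold
    year = 0
    # The attack becomes feasible once attacker capability matches the
    # (safeguard-boosted) threshold.
    while threshold + defensive_offset > attacker_capability:
        threshold *= decay_per_year
        year += 1
    return year

print("Years until feasible, no safeguards:  ", year_threshold_crossed(defensive_offset=0.0))
print("Years until feasible, with safeguards:", year_threshold_crossed())
```

Under these made-up numbers the safeguards roughly double the time to the crossing point without removing it, which is the shape of the claim above.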
Luckily there are some intimidatingly smart people working on the problem and I hope they can leverage the pandemic to get at least some of the funding the subject deserves.
If you know of someone working on a solution such that you think we’re lucky rather than doomed, I’m curious: whose work gives you hope?
I’m pretty hopeless on the subject, not because it appears technically hard, but because the political economy of the coordination problem seems insurmountable. Many scientists seem strongly opposed to the kinds of measures that would naively seem adequate to prevent the risk.
If I’m missing something, and smart people are on the job in a way that gives you hope, that would be happy news :-)
Hm, most of the people I’m thinking of are rather technical, e.g. Kevin Esvelt’s work on secure, distributed research.
Coordination and incentive problems are of another nature, and there I only manage to be prescriptively optimistic. I’ve been interested in algorithms for decentralized economic planning for a while, plan to specialize in that area, and am working with a local left-acc group to organize a think tank that works on these questions. Thanks to mechanism design taking off as a discipline and crypto hype fueling a lot of work on trustless computing, there’s actually a surprising amount of relevant research.
I can respect consciously prescriptive optimism <3
(I’d personally be more respectful to someone who was strong and sane enough to carry out a relatively simple plan to put dangerous mad scientists in Safety Level 5 facilities while they do their research behind a causal buffer (and also put rogue scientists permanently in jail if they do dangerous research outside of an SL5)… though I could also respect someone who found an obviously better path than this. I’m not committed to this; it’s just that when I grind out the math I don’t see much hope for any other option.)
If you want to synthesize a new pandemic, you would need to know what proteins to add. That’s very hard. It’s much easier to
It seems the South Africans, for example, put older variants together in the lab with antibodies against the spike protein to test how soon the virus evolves immune evasion. That’s the kind of research with the potential to produce new pandemic waves like Omicron.