You start out talking about “large scale” attacks, then segue into the question of killing everyone, as though it were the same thing. Most of the post seems to be about universal fatality.
The attacks I’m trying to talk about are ones aimed at human extinction or at otherwise severely limiting human potential (e.g., preventing off-world spread): either directly, through infecting and killing nearly everyone, or indirectly, through causing global civilizational collapse. You’re right that I’m slightly sloppy in calling this “extinction”, but the alternatives are verbosity or jargon.
You haven’t supported the idea that a recognizably biological pathogen that can kill everyone can actually exist. To do that, it has to …
I agree the post does not argue for this, and it’s not trying to. Making the full case is really hard to do without making us less safe through information hazards, but:
it has to have a 100 percent fatality rate
Instead of one 100% fatal pathogen you could combine several, each with an ~independent lower rate.
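A quick back-of-the-envelope sketch of the arithmetic, with entirely made-up numbers: if each of n pathogens kills a given person independently with probability p, then surviving all of them has probability (1 − p)^n, so the combined fatality rate climbs quickly with n. (Whether real pathogens can be made even approximately independent is a separate question.)

```python
# Toy arithmetic only: p and n are made-up illustrative numbers, and
# true independence between pathogens is a strong assumption.
def combined_fatality(p: float, n: int) -> float:
    """Chance of dying to at least one of n independent pathogens,
    each individually fatal with probability p."""
    return 1 - (1 - p) ** n

# Four pathogens at 80% each already exceed 99.8% combined fatality.
for n in range(1, 5):
    print(f"{n} pathogen(s): {combined_fatality(0.8, n):.4f}")
```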
keep the host alive long enough to spread to multiple other hosts
See Securing Civilisation Against Catastrophic Pandemics for the idea of “wildfire” and “stealth” pandemics. The idea is that to be a danger to civilization, a pathogen would likely need either to be so infectious that we are not able to contain it (consider a worse measles) or to have a long enough incubation period that by the time we learn about it, it’s already too late (consider a worse HIV).
have modes of spread that work over long distances and aren’t easily interrupted
In the wildfire scenario, one possibility is an extremely infectious airborne pathogen. In the stealth scenario, this is not required because the spread happens before people know there is something to interrupt.
probably be able to linger in the environment to catch isolated stragglers
This depends a lot on how much you think a tiny number of isolated stragglers would be able to survive and restart civilization.
either be immune to therapy or vaccination, or move fast enough to obviate them
In the wildfire scenario, this is your second one: moving very fast. In the stealth scenario, we don’t know that we need therapy/vaccination until it’s too late.
be genetically stable enough that the “kill everybody” variant, as opposed to mutants, is the one that actually spreads
I think this is probably not possible to answer without getting into information hazards. I think the best I can do here is to say that I’m pretty sure Kevin Esvelt (MIT professor, biologist, CRISPR gene drive inventor, etc) doesn’t see this as a blocker.
(for the threat actor you posit) leave off-target species alone
This doesn’t seem like much of a barrier to me?
If it can exist, you haven’t supported the idea that it can be created by intentional design.
This is another one where right now for information hazards reasons the best I can offer is that Esvelt thinks it can.
If it can be created by intentional design, you haven’t supported the idea that it can be created confidently without large-scale experimentation
Ditto
you haven’t supported the idea that it can be manufactured or delivered without large resources, in such a way that it will be able to do its job without dying out or falling to countermeasures
This is the scary thing about a pandemic: once it is well seated it spreads on its own through normal human interaction. Most things where you might want to cause similar harm you would need to set up a massive distribution network, but not this.
It isn’t easy to come up with plausible threat actors who want to kill everybody.
In an LW context I think the easiest actors to imagine are suffering-focused ones. Consider someone who thinks that suffering matters far more than anything else, enough that they’d strongly prefer ending humanity to spreading life beyond earth.
why isn’t 95 percent fatality bad enough to worry about? Or even 5 percent?
I also think those are quite bad, and worth working to prevent! And, note that everything I’ve proposed at the end of the post is the kind of thing that you would also do if you were trying to reduce the risk of something that kills 5%.
But the point I am arguing in the post is that something that might kill everyone, or close enough to end global civilization, is much more likely than you would get from extrapolating historical attacks by small groups.
Bioweapons in general are actually kind of lousy for non-movie-villains at most scales, including large scales, because they’re so unpredictable, so poorly controllable, and so poorly targetable.
I don’t think those apply for the kind of omnicidal actors I’m covering here?
It would be kind of sidetracking things to get into the reasons why, but just to put it on the record, I have serious doubts about your countermeasures, too.
Happy to get into these too if you like!
Overall, I do think folks who are skeptical of experts who won’t share their full reasoning, or who trust different experts who don’t think this is practical, should end up with a much more skeptical view than I have. I think we can make some progress as we get a clearer idea of which concepts are too dangerous to share, but probably not enough.
Pulling this to the top, because it seems, um, cruxish...
I think the best I can do here is to say that Kevin Esvelt (MIT professor, biologist, CRISPR gene drive inventor, etc) doesn’t see this as a blocker.
In this sort of case, I think appeal to authority is appropriate, and that’s a lot better authority than I have.
Just to be clear and pull all of the Esvelt stuff together, are you saying he thinks that...
Given his own knowledge and/or what’s available or may soon be available to the public,
plus a “reasonable” lab that might be accessible to a small “outsider” group or maybe a slightly wealthy individual,
and maybe a handful of friends,
plus at least some access to the existing biology-as-a-service infrastructure,
he could design and build a pathogen, as opposed to evolving one using large-scale in vivo work,
and without having to passage it through a bunch of live hosts,
that he’d believe would have a “high” probability of either working on the first try, or
failing stealthily enough that he could try again,
including not killing him when he released it,
and working within a few tries,
to kill enough humans to be either an extinction risk or a civilization-collapsing risk,
and that a relatively sophisticated person with “lesser” qualifications, perhaps a BS in microbiology, could
learn to do the same from the literature, or
be coached to do it by an LLM in the near future.
Is that close to correct? Are any of those wrong, incomplete, or missing the point?
When he gets into a room with people with similar qualifications, how do they react to those ideas? Have you talked it over with epidemiologists?
The attacks I’m trying to talk about are ones aimed at human extinction or at otherwise severely limiting human potential (e.g., preventing off-world spread): either directly, through infecting and killing nearly everyone, or indirectly, through causing global civilizational collapse. You’re right that I’m slightly sloppy in calling this “extinction”, but the alternatives are verbosity or jargon.
I think that, even if stragglers die on their own, killing literally everyone is qualitatively harder than killing an “almost everyone” number like 95 percent. And killing “almost everyone” is qualitatively harder than killing (or disabling) enough people to cause a collapse of civilization.
I also doubt that a simple collapse of civilization[1] would be the kind of permanent limiting event you describe[2].
I think there’s a significant class of likely-competent actors who might be risk-tolerant enough to skate the edge of “collapsing civilization” scale, but wouldn’t want to cause extinction or even get close to that, and certainly would never put in extra effort to get extinction. Many such actors probably have vastly more resources than anybody who wants extinction. So they’re a big danger for sub-extinction events, and probably not a big danger for extinction events. I tend to worry more about those actors than about omnicidal maniacs.
So I think it’s really important to keep the various levels distinct.
Instead of one 100% fatal pathogen you could combine several, each with an ~independent lower rate.
How do you make them independent? If one disease provokes widespread paranoia and/or an organized quarantine, that affects all of them. Same if the population gets so sparse that it’s hard for any of them to spread.
Also, how does that affect the threat model? Coming up with a bunch of independent pathogens presumably takes a better-resourced, better-organized threat than coming up with just one. Usually when you see some weird death cult or whatever, they seem to do a one-shot thing, or at most one thing they’ve really concentrated on and one or two low effort add-ons. Anybody with limited resources is going to dislike the idea of having the work multiplied.
The idea is that to be a danger to civilization, a pathogen would likely need either to be so infectious that we are not able to contain it (consider a worse measles) or to have a long enough incubation period that by the time we learn about it, it’s already too late (consider a worse HIV).
The two don’t seem incompatible, really. You could imagine something that played along asymptomatically (while spreading like crazy), then pulled out the aces when the time was right (syphilis).
Which is not to say that you could actually create it. I don’t know about that (and tend to doubt it). I also don’t know how long you could avoid surveillance even if you were asymptomatic, or how much risk you’d run of allowing rapid countermeasure development, or how closely you’d have to synchronize the “aces” part.
This depends a lot on how much you think a tiny number of isolated stragglers would be able to survive and restart civilization.
True indeed. I think there’s obviously some level of isolation where they all just die off, but there’s probably some lower level of isolation where they find each other enough to form some kind of sustainable group… after the pathogen has died out. Humans are pretty long-lived.
You might even have a sustainable straggler group survive all together. Andaman islanders or the like.
By the way, I don’t think “sustainable group” is the same as “restart civilization”. As long as they can maintain a population in hunter-gatherer or primitive pastoralist mode, restarting civilization can wait for thousands of years if it has to.
In the stealth scenario, we don’t know that we need therapy/vaccination until it’s too late.
Doesn’t that mean that every case has to “come out of incubation” at relatively close to the same time, so that the first deaths don’t tip people off? That seems really hard to engineer.
Bioweapons in general are actually kind of lousy for non-movie-villains at most scales, including large scales, because they’re so unpredictable, so poorly controllable, and so poorly targetable.
I don’t think those apply for the kind of omnicidal actors I’m covering here?
Well, yes, but what I was trying to get at was that omnicidal actors don’t seem to me like the most plausible people to be doing very naughty things.
It kind of depends on what kind of resources you need to pull off something really dramatic. If you need to be a significant institution working toward an official purpose, then the supply of omnicidal actors may be nil. If you need to have at least a small group and be generally organized and functional and on-task, I’d guess it’d be pretty small, but not zero. If any random nut can do it on a whim, then we have a problem.
I was writing on the assumption that reality is closer to the beginning of that list.
Happy to get into these too if you like!
I might like, all right, but at the moment I’m not sure I can or should commit the time. I’ll see how things look tomorrow.
Full disclosure: Bostromian species potential ideas don’t work for me anyhow. I think killing everybody alive is roughly twice as bad as killing half of them, not roughly infinity times as bad. I don’t think that matters much; we all agree that killing any number is bad.
Just to be clear and pull all of the Esvelt stuff together, are you saying he thinks that...
I can’t speak for him, but I’m pretty sure he’d agree, yes.
When he gets into a room with people with similar qualifications, how do they react to those ideas? Have you talked it over with epidemiologists?
I don’t know, sorry! My guess is that they are generally much less concerned than he is, primarily because they’ve spent their careers thinking about natural risks instead of human ones and haven’t (not that I think they should!) spent a lot of time thinking about how someone might cause large-scale harm.
If one disease provokes widespread paranoia and/or an organized quarantine, that affects all of them. Same if the population gets so sparse that it’s hard for any of them to spread.
Sorry, I was thinking about ‘independence’ in the sense of not everyone being susceptible to the same illnesses, because I’ve mostly been thinking about the stealth scenario where you don’t know to react until it’s too late. You’re right that in a wildfire scenario reactions to one disease can restrict the spread of another (recently: COVID lockdowns in 2020 cutting the spread of almost everything else).
Anybody with limited resources is going to dislike the idea of having the work multiplied.
Probably depends a lot on how the work scales with more pathogens?
The two don’t seem incompatible, really. You could imagine something that played along asymptomatically (while spreading like crazy), then pulled out the aces when the time was right (syphilis).
I don’t think they’re incompatible; I wasn’t trying to give an exclusive “or”.
Which is not to say that you could actually create it. I don’t know about that (and tend to doubt it). I also don’t know how long you could avoid surveillance even if you were asymptomatic, or how much risk you’d run of allowing rapid countermeasure development, or how closely you’d have to synchronize the “aces” part. … Doesn’t that mean that every case has to “come out of incubation” at relatively close to the same time, so that the first deaths don’t tip people off? That seems really hard to engineer.
I think this is all pretty hard to get into without bringing up infohazards, unfortunately.
It kind of depends on what kind of resources you need to pull off something really dramatic. If you need to be a significant institution working toward an official purpose, then the supply of omnicidal actors may be nil. If you need to have at least a small group and be generally organized and functional and on-task, I’d guess it’d be pretty small, but not zero. If any random nut can do it on a whim, then we have a problem.
If we continue not doing anything then I think we do get to where one smart and reasonably dedicated person can do it; perhaps another Kaczynski?
Full disclosure: Bostromian species potential ideas don’t work for me anyhow. I think killing everybody alive is roughly twice as bad as killing half of them, not roughly infinity times as bad. I don’t think that matters much; we all agree that killing any number is bad.
While full-scale astronomical waste arguments don’t work for a lot of people, it sounds like your views are almost as extreme in the other direction? If you’re up for getting into this, is it that you don’t think we should consider people who don’t exist yet in our decisions?
I can’t speak for him, but I’m pretty sure he’d agree, yes.
Hrm. That modifies my view in an unfortunate direction.
I still don’t fully believe it, because I’ve seen a strong regularity that everything looks easy until you try it, no matter how much of an expert you are… and in this case actually making viruses is only one part of the necessary expertise. But it makes me more nervous.
I don’t know, sorry! My guess is that they are generally much less concerned than he is, primarily because they’ve spent their careers thinking about natural risks instead of human ones and haven’t (not that I think they should!) spent a lot of time thinking about how someone might cause large-scale harm.
Just for the record, I’ve spent a lot of my life thinking about humans trying to cause large-scale harm (or at least doing things that could have large-scale harm as an effect). Yes, in a different area, but nonetheless it’s led me to believe that people tend to overestimate risks. And you’re talking about a scale of efficacy that I don’t think I could get with a computer program, which is a much more predictable thing working in a much more predictable environment.
If you’re up for getting into this, is it that you don’t think we should consider people who don’t exist yet in our decisions?
I’ve written a lot about it on Less Wrong. But, yes, your one-sentence summary is basically right. The only quibble is that “yet” is cheating. They don’t exist, period. Even if you take a “timeless” view, they still don’t exist, anywhere in spacetime, if they never actually come into being.