Any time you find yourself talking about 100 percent fatality, or about anybody trying to achieve 100 percent fatality, I think it’s a good idea to sit back and check your thought processes for dramatic bias. I mean, why isn’t 95 percent fatality bad enough to worry about? Or even 5 percent?
I agree that drama bias is a serious issue here, exacerbated by how much importance EAs/LWers place on pivotal acts for x-risk, which is a massive red flag for drama/story biases.
On the other hand, this discussion is focused on existential risk, and with some exceptions, 5% lethality probably matters little to that question, though I agree that lower fatality percentages matter here, especially since exponential population growth is no longer a safe assumption.
It isn’t easy to come up with plausible threat actors who want to kill everybody.
While I somewhat agree with this, I think this is unfortunately the easiest element of the threat model to satisfy, compared to your other points, so I wouldn’t rely on it.
Unfortunately, I feel like a lot of your comment is asking for things that are likely to be info-hazardous, and I’d like to see an explanation for why the burden of proof should be shifted onto the people who are warning us.
Unfortunately, I feel like a lot of your comment is asking for things that are likely to be info-hazardous, and
Well, actually it’s more like pointing out that those things don’t exist. I think (1) through (4) are in fact false/impossible.
But if I’m wrong, it could still be possible to support them without giving instructions.
I’d like to see an explanation for why the burden of proof should be shifted onto the people who are warning us.
Well, I think one applicable “rationalist” concept tag would be “Pascal’s Mugging”.
But there are other issues.
If you go in talking about mad environmentalists or whoever trying to kill all humans, it’s going to be a hard sell. If you try to get people to buy into it, you may instead bring all security concerns about synthetic biology into disrepute.
To whatever degree you get past that and gain influence, if you’re fixated on “absolutely everybody dies in the plague” scenarios (which again are probably impossible), then you start to think in terms of threat actors who, well, want absolutely everybody to die. Whatever hypotheticals you come up with there, they’re going to involve very small groups, possibly even individuals, and they’re going to be “outsiders”. And deranged in a focused, methodical, and actually very unusual way.
Thinking about outsiders leads you to at least deemphasize the probably greater risks from “insiders”. A large institution is far more likely to kill millions, either accidentally or on purpose, than a small subversive cell. But it almost certainly won’t try to kill everybody.
… and because you’re thinking about outsiders, you can start to overemphasize limiting factors that tend to affect outsiders, but not insiders. For example, information and expertise may be bottlenecks for some random cult, but they’re not remotely as serious bottlenecks for major governments. That can easily lead you to misdirect your countermeasures: for example, all of the LLM suggestions in the original post.
Similarly, thinking only about deranged fanatics can lead you to go looking for deranged fanatics… whereas relatively normal people behaving in what seem to them like relatively normal ways are perhaps a greater threat. You may even miss opportunities to deal with people who are deranged, but not focused, or who are just plain dumb.
In the end, by spending time on an extremely improbable scenario where eight billion people die, you can seriously misdirect your resources and end up failing to prevent, or mitigate, less improbable cases where 400 million die. Or even a bunch of cases where a few hundred die.
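To put rough numbers on that, with probabilities invented purely for illustration: suppose the extinction-level scenario has probability $10^{-6}$ and the 400-million-death scenario has probability $10^{-3}$. The expected tolls are then

$$10^{-6} \times 8\times 10^{9} = 8{,}000 \qquad \text{versus} \qquad 10^{-3} \times 4\times 10^{8} = 400{,}000,$$

so even under guesses that favor the dramatic case, the less dramatic one dominates by a factor of fifty.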