> Unfortunately, I feel like a lot of your comment is asking for things that are likely to be info hazardous, and…
Well, actually it’s more like pointing out that those things don’t exist. I think (1) through (4) are in fact false/impossible.
But if I’m wrong, it could still be possible to support them without giving instructions.
> I’d like to see an explanation for why to shift the burden of proof to the people that are warning us.
Well, I think one applicable “rationalist” concept tag would be “Pascal’s Mugging”.
But there are other issues.
If you go in talking about mad environmentalists or whoever trying to kill all humans, it’s going to be a hard sell. If you try to get people to buy into it, you may instead bring all security concerns about synthetic biology into disrepute.
To whatever degree you get past that and gain influence, if you’re fixated on “absolutely everybody dies in the plague” scenarios (which again are probably impossible), then you start to think in terms of threat actors who, well, want absolutely everybody to die. Whatever hypotheticals you come up with there, they’re going to involve very small groups, possibly even individuals, and they’re going to be “outsiders”. And deranged in a focused, methodical, and actually very unusual way.
Thinking about outsiders leads you to at least deemphasize the probably greater risks from “insiders”. A large institution is far more likely to kill millions, either accidentally or on purpose, than a small subversive cell. But it almost certainly won’t try to kill everybody.
… and because you’re thinking about outsiders, you can start to overemphasize limiting factors that tend to affect outsiders, but not insiders. For example, information and expertise may be bottlenecks for some random cult, but they’re not remotely as serious bottlenecks for major governments. That can easily lead you to misdirect your countermeasures. For example, all of the LLM suggestions in the original post.
Similarly, thinking only about deranged fanatics can lead you to go looking for deranged fanatics… whereas relatively normal people behaving in what seem to them like relatively normal ways are perhaps a greater threat. You may even miss opportunities to deal with people who are deranged, but not focused, or who are just plain dumb.
In the end, by spending time on an extremely improbable scenario where eight billion people die, you can seriously misdirect your resources and end up failing to prevent, or mitigate, less improbable cases where 400 million die. Or even a bunch of cases where a few hundred die.