I think you should think about how your work generalizes between the two topics, and try to make it possible for alignment researchers to take as much as they can from it. This is because I expect software pandemics to become increasingly similar to wetware pandemics, so significant conceptual parts of the defenses for either will generalize somewhat. That said, I also think the stronger form of the alignment problem is likely to be useful to you directly in your own work anyway: if detecting pandemics involves ML in any way, you're going to run into adversarial examples, and you'll quickly be facing the same collapsed set of problems as anyone who tries to deploy ML (what objective do I train for? how well did it work? can an adversarial optimization process, e.g. evolution or malicious bioengineers, break it? what side effects will my system have if deployed?). If you're instead not using ML, I just think your system won't work very well and you're being unambitious about your primary goal, because serious bioengineered dangers are likely to involve present-day ML bio tools by the time they're a major issue.
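To make the adversarial-examples point concrete, here's a toy sketch (purely illustrative; the linear "detector" and all the names in it are hypothetical, not anyone's actual system): a detector that flags inputs by thresholding a learned score, and a gradient-following adversary that perturbs a flagged input until it slips under the threshold. Whatever optimizes against the detector — evolution, or a bioengineer with access to the model — plays the role of the adversary here.

```python
# Toy illustration only: a hypothetical linear "detector" and an FGSM-style
# adversary that nudges a flagged input against the gradient until it is
# no longer flagged. Real detectors are nonlinear, but the failure mode is
# the same shape.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were learned; score > 0 means "flag as dangerous".
w = rng.normal(size=16)
b = -0.5

def score(x):
    return float(w @ x + b)

# Start from an input the detector flags (constructed to lie on the flagged side).
x = w.copy() + 0.3 * rng.normal(size=16)
print("original score:", round(score(x), 2))

# For a linear detector the gradient of the score w.r.t. the input is just w.
# Take small steps against its sign until the score drops below the threshold.
eps = 0.05
x_adv = x.copy()
steps = 0
while score(x_adv) > 0:
    x_adv = x_adv - eps * np.sign(w)  # FGSM-style step
    steps += 1

print(f"evaded after {steps} steps; score {score(x_adv):.2f}")
print("L_inf size of perturbation:", round(float(np.max(np.abs(x_adv - x))), 2))
```

The point isn't the specific attack; it's that any fixed learned decision boundary hands an optimizing adversary a target, which is exactly the set of questions in the parenthetical above.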
But I think you in particular are doing something important enough that it's quite plausible to me that you're correct. That's very unusual, and I wouldn't say it to many people. (Normally I wouldn't bother directly telling someone they should switch to working on alignment, both because I don't want to waste their time and because I'm usually confident they wouldn't be worth my time to try to spin up; instead I just make noise about the problem vaguely in people's vicinity and let them decide to jump on it if they want.)