Control research exclusively cares about intentional deception/scheming; it does not aim to solve any other failure mode.
(nitpick, doesn’t address main point of article) I think this is incomplete. Though control research does indeed care a lot about scheming, control techniques can be applied more broadly to handle any worst-case deployment behavior. See Josh Clymer’s post “Extending control evaluations to non-scheming threats.”
Hmm, when I imagine “scheming AIs that are not easy to shut down even with concerted nation-state effort, that are attacking you with bioweapons, but that are weak enough that you can bargain/negotiate with them,” I can imagine this outcome inspiring a lot more caution than many other worlds where control techniques work well but we can’t get any convincing demos/evidence to inspire caution (especially if control techniques inspire overconfidence).
But the “is currently working on becoming more powerful” part of your statement does carry a lot of weight.