I think there is a danger that the current abilities of ML models in drug design are being overstated. The authors appear to have taken a known toxicity mode (probably acetylcholinesterase inhibition, the biological target of VX, Novichok, and many old pesticides) and trained their model to produce other structures with activity against this enzyme. The authors claim the model produced significantly more active structures, but none were synthesised. Current ML models in drug design are good at finding similar examples of known drugs, but are much less good (to my own disappointment; this is what I spend many a working day on) at finding better examples, at least in single-property optimisation. This is largely because, in predicting stronger compounds, the models are generally moving beyond their domain of applicability. In point of fact, the field of acetylcholinesterase inhibition has been so well studied (much of it in secret) that it is quite likely, IMO, that the list of predicted highly toxic designs is at best only very sparsely populated with nerve agents significantly stronger than the best available. Identifying which structures those are, out of potentially thousands of good designs, remains a very difficult task.
This is not to take away from the authors’ main point, that ML models could be helpful in designing better chemical weapons. A practical application might be to attempt to introduce a new property (e.g. brain penetration) into a known neurotoxin class that lacked that property. An ML model optimised on both brain penetration and neurotoxicity would certainly be helpful in the search for such agents.
Good analysis, but I did not upvote because of the potential info-hazard posed by explaining how to use AI to create hazardous compounds more effectively. I’d encourage others to do the same, and you should consider deleting this comment.