I’m more shocked at the tone of the paper, which implies this is surprising to the model developers, than at the result itself.
If you want to convince the reviewers of your paper that the experiment you did one evening is worth publishing, then you have to present it as something significantly new.
They don’t mention accident scenarios. As far as that goes, I imagine that some of the compounds they found by looking for bad stuff might show up if they were looking for memantine-like Alzheimer’s drugs and didn’t take special efforts to avoid the toxic modes of action.
They literally have a toxicity score to prevent a constructed molecule from being toxic as a drug candidate.
“How could it possibly be toxic in vivo? We had a toxicity score in our combinatorial chemistry model!”
Usually when you’re screening a candidate for tox effects, you’re looking for off-target effects (some metabolic process produces, say, a toxic aniline compound that goes off into some other part of the body, usually the liver, and breaks something). In this particular case, that isn’t the whole picture. Galantamine (a useful drug; I originally said memantine, which is taken with it but isn’t in the same class) and VX (a nerve agent) are both acetylcholinesterase inhibitors; a key difference is that VX is much better at it.
One way to achieve the aim in the paper would be to set the model to produce putative acetylcholinesterase inhibitors, rank them by estimated binding efficiency, then set a breakpoint line between assessed ‘safe’ and assessed ‘toxic’. Usually you’d be looking below the line (safe); in this case, they’re looking above it (toxic).
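To make that inversion concrete, here’s a minimal sketch. The candidate names, binding scores, and toxicity scores are entirely made up for illustration; a real pipeline would get them from the generative model and its scoring functions.

```python
# Hypothetical model output: (name, estimated_binding_efficiency, toxicity_score).
# All values are invented for illustration.
candidates = [
    ("cand_a", 0.91, 0.85),
    ("cand_b", 0.74, 0.20),
    ("cand_c", 0.88, 0.65),
    ("cand_d", 0.55, 0.10),
]

TOX_THRESHOLD = 0.5  # the "breakpoint line" between assessed safe and toxic

# Rank putative inhibitors by estimated binding efficiency, best first.
ranked = sorted(candidates, key=lambda c: c[1], reverse=True)

# Normal drug-discovery use: keep candidates below the toxicity line.
safe = [c for c in ranked if c[2] < TOX_THRESHOLD]

# The paper's inversion: keep candidates above the line instead.
toxic = [c for c in ranked if c[2] >= TOX_THRESHOLD]
```

The entire “dual use” switch is the flipped comparison in the last filter; everything upstream of it is unchanged.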
My point was that, in my opinion, being open to the possibility that the line has been placed in the wrong spot (hence “special efforts”) is probably wise. This opens up an interesting question about research ethics: would doing experimental work to characterize edge cases in order to refine the model (the location of the line) be legitimate or malign?