“how could it possibly be toxic in vivo, we had a scoring for toxicity in our combinatorial chemistry model!”
Usually when you’re screening for tox effects in a candidate, you’re looking for off-target effects (some metabolic process produces a toxic aniline compound that travels off to some other part of the body, usually the liver, and breaks something). In this particular case, that isn’t the whole picture. Galantamine (a useful drug; I originally said memantine, which is often taken with it but isn’t in the same class) and VX (a nerve agent) are both acetylcholinesterase inhibitors; a key difference is that VX is much better at it.
One way to achieve the aim in the paper would be to set the model to produce putative acetylcholinesterase inhibitors, rank them by estimated binding efficiency, then draw a breakpoint line between assessed ‘safe’ and assessed ‘toxic’. Usually you’d be looking below the line (safe); in this case, they’re looking above it (toxic).
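A toy sketch of that selection logic, just to make the inversion concrete. Everything here is invented for illustration (the compound names, the scores, the 0.5 threshold); a real pipeline would get binding and toxicity estimates from trained predictive models, not hard-coded numbers.

```python
# Toy illustration: flipping a toxicity filter from "discard" to "keep".
# All names, scores, and the threshold are made up for this sketch.

def select_candidates(candidates, tox_threshold, want_toxic=False):
    """Rank candidates by predicted binding strength (best first),
    then keep only those on the chosen side of the toxicity line."""
    ranked = sorted(candidates, key=lambda c: c["binding"], reverse=True)
    if want_toxic:
        # The inverted search: keep what a normal screen would throw away.
        return [c for c in ranked if c["toxicity"] > tox_threshold]
    return [c for c in ranked if c["toxicity"] <= tox_threshold]

compounds = [
    {"name": "A", "binding": 0.9, "toxicity": 0.95},  # potent binder, toxic
    {"name": "B", "binding": 0.7, "toxicity": 0.20},  # drug-like profile
    {"name": "C", "binding": 0.4, "toxicity": 0.60},  # weak and moderately toxic
]

safe = select_candidates(compounds, tox_threshold=0.5)
toxic = select_candidates(compounds, tox_threshold=0.5, want_toxic=True)
```

Same ranking, same threshold; the only change is which side of the line you harvest, which is what makes the dual-use problem so uncomfortable.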
My point was that, in my opinion, being open to the possibility that the line has been drawn in the wrong place (hence the special efforts) is probably wise. This opens up an interesting question about research ethics: would doing experimental work to characterize edge cases in order to refine the model (the location of the line) be legitimate or malign?