All biological sciences research is dual use. If you don’t see the evil, you’re not looking hard enough. I’m more shocked at the tone of the paper, which implies this was surprising to the model developers, than at the result itself. When you can do combinatorial chemistry in silico, you can make all sorts of stuff...
They don’t mention accident scenarios. As far as that goes, I imagine that some of the compounds they found by looking for bad stuff might show up if you’re looking for galantamine-like Alzheimer’s drugs (edit: originally said memantine, whoops) and don’t take special efforts to avoid the toxic modes of action.
… aging research?
(also inb4 “muh immortal elites”, hereditary power transfer already effectively does the same thing)
There’s a quick CFAR class that I taught sometimes, which was basically “if you understand a bug, you should be able to make it worse as well as better.” [That is, suppose you develop an understanding of which drugs cause old cells to die; presumably you could use that understanding to develop a drug that causes them to stick around longer than they should, speeding up aging, or maybe to kill young cells, or so on.]
If you want to convince the reviewers of your paper that the experiment you did one evening is worth publishing, then you have to present it as something significantly new.
They literally have a toxicity score, there to keep a constructed molecule from being toxic as a drug candidate.
“How could it possibly be toxic in vivo? We had a scoring for toxicity in our combinatorial chemistry model!”
Usually when you’re screening a candidate for tox effects, you’re looking for off-target effects (some metabolic process produces a toxic aniline compound that travels to some other part of the body, usually the liver, and breaks something). In this particular case, that isn’t the whole picture: galantamine (a useful drug; I originally said memantine, which is taken with it but isn’t in the same class) and VX (a nerve agent) are both acetylcholinesterase inhibitors. A key difference is that VX is much better at it.
One way to achieve the aim in the paper would be to set the model to produce putative acetylcholinesterase inhibitors, rank them by estimated binding efficiency, then set a breakpoint line between assessed ‘safe’ and assessed ‘toxic’. Usually you’d be looking under the line (safe); in this case, they’re looking above it (toxic).
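The generate/rank/flip-the-threshold procedure described above can be sketched in a few lines. This is a toy illustration only: the generator and the two scoring functions are random stand-ins (every name and number here is hypothetical, not the paper’s actual pipeline), and the sole point is that inverting one comparison turns a safety filter into its opposite.

```python
import random

random.seed(0)

def generate_candidates(n):
    """Stand-in for a generative model proposing candidate molecules."""
    return [f"mol_{i}" for i in range(n)]

def predicted_binding(mol):
    """Stand-in for an estimated binding-efficiency score (higher = tighter)."""
    return random.random()

def predicted_toxicity(mol):
    """Stand-in for the model's toxicity score (higher = more toxic)."""
    return random.random()

TOX_LINE = 0.7  # the assessed 'safe'/'toxic' breakpoint (arbitrary here)

candidates = generate_candidates(1000)
scored = [(m, predicted_binding(m), predicted_toxicity(m)) for m in candidates]
scored.sort(key=lambda t: t[1], reverse=True)  # rank by binding efficiency

# Normal drug-discovery use: keep candidates under the line (safe).
safe = [m for m, b, t in scored if t < TOX_LINE]
# The paper's inversion: keep candidates above the line (toxic).
toxic = [m for m, b, t in scored if t >= TOX_LINE]
```

The only difference between the two selections is the direction of one inequality, which is the whole point: the dual-use capability is already present in the benign pipeline.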
My point was that, in my opinion, being open to the possibility that the line has been placed in the wrong spot (hence the “special efforts”) is probably wise. This opens up an interesting question about research ethics: would doing experimental work to characterize edge cases in order to refine the model (the location of the line) be legitimate or malign?