Suppose we’re in a bad-case modern scenario, where there’s been close industry involvement, including us documenting early parts of the experiment, as well as some attention in the professional literature, and some researchers poised to follow up on our results. And then we directly discover something that would be catastrophic if used, so we have to keep it in as few people’s hands as possible; we can’t just be like Paul Berg and write an article asking for a moratorium on research. Let’s say it’s self-replicating nanotechnology or something.
One process you could follow is sort of like getting off Facebook. Step one is to obfuscate what you’ve already done. Step two is to discredit your product. Step three is to think up a non-dangerous alternative. Step four is to start warning about the dangers.
In the case of nanotech, this would mean releasing disinformation in our technical reports for a while, then claiming contamination or instant failure of the samples, with e.g. data cherry-picked from real failures to back it up, then pushing industrial nanotech for protein processing, using our own manufactured failure as a supporting argument, and finally talking to other researchers about the danger of self-replicating nanotech research.
Your bad-case modern scenario seems more like the average case to me (the extent depending on the field). Most research that promises breakthroughs requires a lot of funding these days, which implies either close industry involvement or being part of some government-sponsored project. Both imply close supervision and teams of researchers: no Dr. Perelman-type one-man show. Even if there’s no corporate/academic supervisor pestering you, if you want to do (by default expensive) research, you and your team had better publish or perish, as the aphorism goes.
Note I did not suggest just throwing away samples; falsifying your reports and releasing disinformation both open you up to legal liability, damages, and pariah status, and also depend on convincing your research group. Unless you envision yourself as the team leader, in which case it’s unlikely you’ll be the first to notice the danger, and in which case you’ll probably be self-selected for being enthusiastic about what you do.
Take nanotech: say you start thinking that your current project may open the door to self-replicators. Well, most any nanotech-related research paves part of the way there, whether a large or a small chunk. So stop altogether? But you went into the field willingly (presumably), so it’s not like you’re strictly against any progress that could be dual-used for self-replicators.
What I’m getting at is that a researcher a) noticing the dangerous implications of his current research, then b) devoting himself to stopping it effectively, and c) those efforts having a significant effect on the outcome is a contrived sequence in almost any scenario that isn’t as concocted as a Chinese Room thought experiment.
Maybe it’s selection bias from the scientific news cycle, but unless there is a large “dark figure” (unreported number) of secret one-man hermit researchers like Perelman, for whom your techniques may potentially work, there’s little stopping the (hopefully hypothetical) doomsday clock.
I agree; it seems a very contrived scenario. Though, should such a contrived scenario occur, it seems to me that legal liabilities, pariah status, and damages would seem negligible problems next to the alternative.